Political Discussion

I remember an article from last year about assault weapons mounted on the backs of robot dogs used by the military.

Really scary stuff if you think about it.

Which brings up a good point I saw in an article yesterday. Is remote warfare moral?

Should a country send drones and robot dogs to kill targets instead of people?
 
I guess it depends on what you mean by "remote" and "drones"? (And I'll put aside the bigger questions about the morality of warfare in general.)

Current "remote" warfare - that is, airborne assets - is not autonomous and is still controlled by humans (it's a big reason why there was a push w/in the military to stop using the word "drone" and switch to the term "remotely-piloted" - they convey different things).

Things like target vetting and validation are still accomplished. ROEs (rules of engagement) and LOAC (the law of armed conflict) are observed (whether or not one agrees with them as they exist is a different matter). Terminal control is exercised by a human.

So, a person in a cockpit 20K feet up releasing munitions and a person thousands of miles away on the ground is a matter of degrees of separation. You could argue that all of that is "remote". Heck, a person on the ground operating a rocket launcher that, in some cases, can fire on targets 80+ miles away... I'd argue that is remote insomuch as it turns a human target into a feature on a display.

But drones... that is different than remote. Are you asking about a totally autonomous kill chain? AI that is given a set of parameters within which it can engage and kill? Or human control that provides command-and-control and at some point relinquishes execution to a drone? I mean, if you strap some shoulder cannons on a robot dog and give it a swath of land to patrol and engage anything it comes across, with no mediation in that process by a human... well, yeah, that becomes Black Mirror, and I'd argue it is foundationally flawed.

Although I could imagine someone else taking another view: say you set up a No Man's Land and littered it with these patrolling robodogs, and it was made clear that anyone entering could be indiscriminately attacked. I'd say there is a current, less sophisticated analog in how we lay minefields. Mines don't care what side you're on - they're tools that kill. Robodogs, drones, whatever - they'll be advancements on ways in which we already like to kill one another, and I guess we've already answered some of those baseline questions about what's morally acceptable as a species.
 


I'm not sure if either remote or fully automated warfare is more moral than the other. I can see problems with both. I think we would have to have some kind of comparison, but the closest we have is the Clone Wars. And I don't think the Star Wars universe is a good apples-to-apples comparison to real life.
 
I guess the difference I was trying to get at between remote and automated is decision-making.

So, remote operations, as they exist, have the same level of control as traditional capabilities. They only change where the decision-maker is located (a cockpit overhead vs. a seat geographically separated).

But the same target analysis (vetting, validation, nomination, prioritization) and collateral damage estimates are accomplished.

As I see it, remote delivery platforms are no different in human control than more traditional stand-off munitions. The difference is really just in proximity to what you're killing. The same could be said for a person with a sword vs. a rifle (100m or so difference in proximity) vs. a short-range missile vs. a ballistic missile vs. a remote aircraft. But in all these instances there is a human factor. Again, this isn't addressing or changing any foundational questions about how we wage war and kill people.

Automation would upend those critical human decisions (for worse, or maybe better, depending on how you see the human ability to judge life and death).
 


This was exactly my first thought as well, but then I thought about how automation has happened in policing. All the algorithms are written by humans, and those algorithms are affected by human biases. Would we see that if warfare were fully automated? Maybe.
 
These dogs are typically carrying laser scanners or mechanical-system layout tools. I recently saw a wall-layout bot that was more like WALL-E without arms; it literally drew the wall plans onto the slab. Basically a rolling plotter at full-scale print size.


I mean… do they have one that can do all my chores?
 
And I think that's where I see the line as well. Yes, AI is/will be programmed by humans, but there is a hands-off, set-it-and-forget-it quality that then removes human decision-making. Even if a human programs for binary pathways or actions based on conditionals, the humans relinquish control early in the chain.

Humans program self-driving cars. They still crash when perhaps a human driver would not.

Or, since you mentioned policing... funny enough, I was watching THX 1138 last night. So, if we had policing androids programmed by humans to apprehend human X when they do Y with Z amount of force, then all they would be directed to do is follow if-then commands based on detection triggers. But what would/could a human do differently (ideally, that is, 'cause let's be honest about our indefatigable ability to not rise to our humanity)? Perhaps have a better sense of context. So, yes, maybe perpetrator X is doing Y, but maybe they're having a mental episode - a variable not considered in the programming (again, human policing often fails at this...). I guess empathy is the missing, unprogrammable thing I'm reaching for?
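To make that concrete, here's a toy sketch of the kind of if-then trigger logic I mean (everything in it - names, rules, thresholds - is hypothetical, not any real system):

```python
# Toy sketch of a rule-based "policing android" decision loop.
# All names, rules, and thresholds here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Detection:
    subject_id: str
    action: str        # what the sensors classified the subject as doing
    threat_level: int  # 0-10, from some upstream classifier

# Hard-coded if-then rules: (triggering action, max threat level, response)
RULES = [
    ("fleeing",  3,  "follow and observe"),
    ("trespass", 5,  "issue verbal warning"),
    ("assault",  10, "apprehend with Z amount of force"),
]

def decide(detection: Detection) -> str:
    """Walk the rule table and return the first matching response."""
    for action, max_threat, response in RULES:
        if detection.action == action and detection.threat_level <= max_threat:
            return response
    return "no action"

# The gap: a subject mid-mental-episode may still classify as "assault",
# and nothing in the rule table can represent that context. The machine
# just executes the branch it was given.
print(decide(Detection("X", "assault", 7)))  # -> "apprehend with Z amount of force"
```

Whatever isn't encoded as a variable simply doesn't exist to the machine - and that's the context (or empathy) gap I'm getting at.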
 
Asimov gave us a great framework with his Laws of Robotics: a robot may not harm a human or allow a human to come to harm through inaction, and a robot must obey the orders of a human unless they conflict with the First Law. I realize that we need to come up with a definition of harm for robots, which involves a process that can include errors - much like when AI cars were hitting cats, especially at night. It was because no one had loaded an image of a cat running in the road at night; the AI had no reference and thus was not properly identifying cats in the road. So I respect that coding this will involve a fallible process, but before we go messing around with robot dogs with lasers, we should set some ethical ground rules.
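For what it's worth, the first two Laws really do read like an ordered rule check. A rough sketch of that ordering (the harm predicate is exactly the fallible, needs-a-definition part described above; everything here is hypothetical):

```python
# Rough sketch of Asimov's first two Laws as an ordered rule check.
# The harm predicate is the hard, fallible part: "harm" needs a definition,
# and perception gaps (like no nighttime cat images in the training data)
# can break it. All names and rules here are hypothetical illustrations.

def would_harm_human(action: str) -> bool:
    # Stand-in for a real perception/prediction system.
    return action in {"fire laser", "charge crowd"}

def permitted(action: str, ordered_by_human: bool) -> bool:
    # First Law dominates: refuse anything that would harm a human.
    # (Its inaction clause - *preventing* harm - is even harder, since it
    # obligates the robot to act, not merely refrain.)
    if would_harm_human(action):
        return False
    # Second Law: obey a human order, but only after it has cleared Law 1.
    return ordered_by_human

print(permitted("fire laser", ordered_by_human=True))        # False: Law 1 wins
print(permitted("patrol perimeter", ordered_by_human=True))  # True: Law 2 applies
```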
Now there is a complex question!
Conflict arises when compassion fails. War often arises when greed wins on one side or another, or both. One can fight against greed and “be on the side of right”, but is engaging in conflict of any kind moral, given that war, even for the right reasons, is fundamentally wrong and hurtful?
 

Agreed, but we are a fundamentally flawed species, and conflict is inevitable. In the face of greed, is allowing it to prevail less moral than fighting it?

Now, having a large and extravagantly funded armed forces, backed up by a military-industrial complex, capable of waging multiple wars in various global venues? Totally immoral.
 
Well, there are only four countries this could apply to, and one of them can't really afford to do it properly and still thinks it's as important as it was in the 19th century.
CANADIANS!!!

Those fucking Bullwinkle-riding syrup bags!

Y'all worried about robo dogs. Just wait till robotic moose are shooting antler lasers at satellites. Then my weirdly intense hatred of Canadians won't be so inappropriate and xenophobic will it?!!! #DefundTheCanadians
 
The price of housing in Massachusetts has skyrocketed over the past few decades, and the climb has continued during the pandemic as people move from Boston to the suburbs.

Apartment construction is almost nonexistent in posh suburban towns, and the towns that are on a building boom, like Salem, aren't building anything that could be remotely considered affordable. Of the new apartment complexes I have seen go up since moving to Salem, there hasn't been a single one where I could afford a one-bedroom apartment.

And here is why that is happening. As I have mentioned before in this thread, Massachusetts state law requires a supermajority vote of the local board to approve zoning for the construction of any apartment complex, and the law itself also requires a supermajority to change.

Thus a loud and vocal minority is essentially able to block everything.

The vocal few don't want community dynamics to change, are afraid crime will come to their community, and are worried about reductions to their property values.

In the case of posh suburban towns that only have single-family homes, they don't want to bring in apartments because that will bring in the "riff-raff".

In the case of communities like Salem that have tons of apartments, only ultra-high-end luxury apartment complexes are being put up. Anything that is billed as affordable or working-class gets blocked because the community is worried about increased crime and changing community dynamics.

In both scenarios, a reduction in property values is a big concern.

Governor Baker has a plan to get these towns to build apartments. It's pretty much a "build apartments or pay up" kind of shakedown.


Virtually all of eastern Massachusetts / the Boston suburbs is included in this new public-transit zoning - the entire MBTA area.
What Governor Baker is telling towns is: build X number of apartment complexes, or you'll pay the MBTA more money and may lose state grants.

This is something that does not sit well with people in posh communities who wouldn't ever be caught dead using public transportation. There is a lot of screaming and finger-pointing going on, and protests outside Governor Baker's mansion over this already.

Many of these communities think it's an overreach of the state to take more of their money if they don't build apartments.


Could this idea actually work? Or will these communities just pay higher taxes to avoid building apartments? Or will the building trend in Salem continue, where anything that does get built is not affordable and does nothing to solve the "affordability crisis" the state is facing?

What are everyone's thoughts on this?


In addition to all this, apparently there is a ban on 55-plus-only communities: the state will not allow cities to approve zoning for developers to build communities that are age-discriminatory.

In many of the communities that have no apartments, what developers are building are gated communities of McMansions for people 55-plus only. These communities don't add much housing to the state, exclude younger people from being able to own, and contribute to rising property values.

One of the things I have heard discussed is that if the state only ends up withholding grants from these communities, it doesn't give the affluent communities much incentive to change anything. They will likely continue on as is, which would only increase the burden on poorer communities to comply.
 
AOC saying things that everyone needs to hear and understand:

U.S. Rep. Alexandria Ocasio-Cortez said capitalism is 'not a redeemable system' for Americans and represents a 'pursuit of profit' with disregard for human, environmental, and social consequences, run by an elite minority.


Now she also was at that Met ball unmasked and all, but...

Maybe championing a system where the only economic gains a person makes are through exploitation of others isn't the best way to run a planet that is full of humans.
 

Oh, for sure it's not. But if you think otherwise, you are labeled a socialist/communist extremist and a whack job.

The powers that be don't want you to believe that this is true and will do their damnedest to continue to exploit others for profit. It's not their problem to worry about the planet, because they will be rich and dead before it harms them.
 