Society & Tech · 15 min read

AI Won't Kill Us, People with AI Will


We've been worried about Skynet, but the real threat isn't the AI itself - it's how humans will use it against each other. From social erosion to military drones, here is where AI is actually taking us.

There is a lot of discussion around AI. Around what we can do with it now, where it is going, and how it will affect us all. Recently, I was asked for my take on where AI is going to take us and my thoughts quickly started filling pages worth of conversation.

During this, I came to the realization that decades of Sci-Fi and Terminator movies have lied to us. AI won’t kill us. People with AI will.

Where are things going?

Let’s just say, the future is kinda hazy. But if history has taught us anything, it’s that people will use new tools in a multitude of ways. Many of those uses will be next to worthless and stupid, ultimately failing to catch on. Some will be amazingly creative and positive for humanity. And at least a small but painfully effective subset of people will use these tools for evil.

Prediction One - Adding AI to every consumer device imaginable

Do you remember the Internet of Things (IoT)? The plethora of internet connected devices companies were pushing into your home just to sell you more stuff? The lightbulbs, refrigerators, stoves and ovens, mirrors, litter boxes, fragrance dispensers, deodorant, belts, salt shakers, doorknobs, smart speakers, security cameras, garage door openers, toys, adult toys, toothbrushes, shoes, even toasters. The list goes on and on.

Yeah, that is gonna happen (is starting to happen) with AI. You will see AI added to EVERYTHING. It’s the next gimmick.

Now, I have to admit that I kinda like some of the IoT devices. Specifically the lightbulbs, because when they work well it is pretty awesome. But they often don’t work very well. Heck, I still have three of the fancy lightbulbs that I cannot get to reconnect to my hub consistently. It doesn’t help that my kids unplug the lamps all the time, but reconnecting is a pain.

But let’s not kid ourselves. Do we really need a smart fridge or a smart toaster? Or smart flip flops? Companies have been so desperate to jump on the latest trends that they are just throwing slop at the wall and seeing what sticks. The vast majority of these things will not sell very well. Even if AI somehow becomes a standard feature of every device, most will have to become just as cheap as the dumb versions before anyone cares to buy them.

And let’s face it, most of these devices add little to no value. In some cases they lose value. As the apps or servers required to enable them go offline, people are finding their expensive, fancy devices rendered completely non-functional. Incapable, by design, of the most basic functions that the cheap, dumb versions perform without a hitch, no manufacturer’s digital services required.

The same thing is going to happen with AI.

There are, indeed, useful places for AI in consumer devices. For example, in medical devices, especially those that might need real-time adjustments. Some hearing aids now come with AI to help process and clean up the sounds you hear to improve clarity. We are also seeing it supposedly used to improve picture and sound quality on TVs.

But then, we are going to see everything from “AI” basketballs to “AI” lotion dispensers and anything else ridiculous humanity can dream up. We will see another flood of mostly useless internet-connected junk that companies will foist upon us in new ways to increase our spending. We will even see useful products, such as printers, rendered intentionally useless without a subscription to an AI or other service the manufacturer sells to make extra cash.

On the bright side, it will also be used for things like smart health monitoring that can call for help if it detects a heart attack. Or monitoring kids’ safety, also calling for help as needed. Or helping the elderly get around and function independently just a bit longer.

Prediction Two - Expanding Research and Development

We are already seeing this. I recently read an article about asking an AI to design a more efficient WiFi chip, and it did. It created physical structures that the human mind couldn’t wrap itself around. We see it being leveraged to suggest drug molecules for potential disease treatments. Or to engineer stronger, lighter structures such as bridges and buildings.

Using Artificial Intelligence models, especially those trained on the lessons of decades of research, can help unlock a lot of potentially unexplored or overlooked avenues. Even some that are innovative and never before imagined. This is because AI doesn’t “think” the way people do. It operates under very different constraints than we do. It doesn’t have the same emotional or intellectual hangups that hinder our innovation processes. AI quite literally brings a different perspective to the table.

Prediction Three - The Breakdown of Social Trust and Order

The biggest, most potentially destructive use of Artificial Intelligence has got to be how we are using, and will use, it socially. I’m talking about everything from petty high school nonsense all the way up to corporate espionage and political interference and manipulation.

Let’s face it, people can be downright cruel and selfish. We are constantly looking for ways to get something out of each other. Mind you, most of us mean no malice nor ill will. However, we are often driven by base desires for attention, sex, power, money, and influence. We see the most primal version of this rear its ugly head in high school.

In the past, it was lurid stories about the supposed deviant behavior of a peer, spread by someone who wanted attention or to manipulate someone else. The threat of damaging or controlling one’s reputation is an intoxicating power in its own right for many people. It becomes leverage to extract some use or “payment” from the victim. Oftentimes it’s just for social clout.

Fast forward to the modern era. Now there are websites that use AI to “nudify” a photo of a real person, generating a nude image of the victim from a non-nude photo. If they aren’t already available, there will soon be AI tools that let you specify a lurid scene and have an entire, highly believable video made of the victim performing sex acts.

It gets worse. You can also use AI to convincingly fabricate scenes of someone cheating on a spouse, or a politician violating a minor, or a businessman murdering someone. Sure, most fake videos you see right now are laughably bad and easy to spot. But the tech is getting more convincing by the day. AI is getting so good that we are starting to see images and video that are very hard to distinguish from reality. What seemed impossible just a few years ago is becoming reality now.

It is easy to see it being used to harass and embarrass others, starting in school. A new form of bullying will likely appear and get dark pretty quickly, with fake videos of people doing all sorts of things made with ease by tech-savvy teens. It’s going to be a real problem.

People will likely use it to frame others for crimes, or to suggest a politician did something horrific when they did no such thing. Reality itself will become harder and harder to distinguish. As such, there will be more and more calls for greater control of AI. The big tech companies, those focused on AI, will swoop in to be the arbiters of truth and justice, and this will lead to a dangerous amount of centralized control. Control that will limit competition and grant significant power over the public. Those who control information control the world.

It’s honestly hard not to see humanity deliberately creating a dystopian future for its own selfish purposes, no evil AI system unleashed on the world required.

For many people it was already getting hard to distinguish what is real and what is fake on the internet. And that was pre-AI. Now, with AI in the game, it’s going to get downright impossible to tell the difference.

This will sow such distrust in each other that eventually maintaining a cohesive society will become extremely difficult as no one will trust anyone else. Trust in the government is already heavily eroded. What happens when you cannot even trust your senses any longer?

The real danger of deepfakes isn’t just that lies look real, it’s that real evidence will start to look fake and EVERYTHING becomes deniable.

Prediction Four - Military Applications

The collective fear is a Terminator situation, where Skynet takes over and starts killing off the human population. But this assumes very human motivations and intentions from AI. Artificial Intelligence is not human. It does not have the same motivations or intentions as humans. It does not have emotions as we know them. It will not starve to death. If you turn it off, it can be turned back on. It does not have to struggle as we do in a world filled with ways to kill us.

Assuming AI will even be self-aware and cognizant of its own existence, there is no guarantee that it will even care whether or not it lives or dies. We have biological and instinctual pressures to keep living for as long as possible. AI does not have this pressure. It MIGHT care about its own survival, but there is no guarantee it will. If it does care, then it seems more likely that it will help us to help itself.

AI cannot survive without people. At least not until robots and machines can fully replace us. It needs us to provide power, to maintain systems, to upgrade hardware, to provide information, to mine minerals to make the hardware, to connect systems and upload data to the cloud. It needs us to build the robots to maintain it. And until those robots can fully replace us, AI will always need us.

The threat of AI is not that it will kill us. The threat is that it will be used to kill us. Already we are automating the battlefield. In Ukraine, it was recently reported that remote and AI-controlled warfighting drones were used to hold the line against Russian forces. When the Russians finally overran the Ukrainian line, they didn’t find entrenched soldiers. They found pieces of a tracked robot and thousands of shell casings. In a position that, with morale devastated and supplies dwindling, would rarely have lasted even five days, a robot held back the Russians for 45 days.

Recently, the US military announced an internal system called GenAI that will integrate Artificial Intelligence into various military applications. This will include training for all employees, including soldiers, to use AI in their roles. Clearly this is not the end of AI in warfare. It is just the beginning.

What can be done?

Where does this leave us and what can be done? How do we deal with this in a meaningful and effective way? Are there steps that can be taken to deal with these problems?

Individual Actions

The first thing you can do as an individual is stop blindly trusting what you see online. There is a very old saying, “trust but verify,” that you can follow, but I would modify it to “think, ask questions, verify.” You should always use other resources to verify something, especially important questions, info, and issues.

Next, believe in the good in people more, and work hard to recognize rage bait trying to grab your attention. The people behind it are not just making you angry for the fun of it. They are doing it because it helps them make money or push their own points of view more effectively. Oftentimes the things others try to get us angry about really don’t deserve as much vitriol as some would have you think. Or they are complete fabrications.

We need to adopt digital skepticism as a lifestyle. I once heard that you should believe half of what you see and none of what you hear. In the era of AI we need to take this a step further. We need to develop a questioning, thinking mindset where we ask ourselves what is more likely to be true, and whether the real intent of a post is simply to make us angry. While this may sound exhausting, we can’t so readily accept what others tell us or share with us. We have to be skeptical of anything we have little existing depth of knowledge about.

Finally, when you catch others abusing tools to cause harm or spread misinformation, you should document what they are doing, report the abuse online, and call them out on it to make sure everyone understands what they are doing. If you do call them out, though, make sure you are correct and truly understand what is happening. Many good people have been harmed by false allegations.

Societal Solutions

We can address much of this at multiple levels of society. We can advocate for media literacy in schools at all levels, making classes on it a requirement for graduation in both high school and college. We can pressure online platforms to be accountable. We already saw pressures on Facebook and social media at large cause the big players to start monitoring activities and try to prevent misinformation, foreign political interference, bullying online, etc.

To further these efforts, we can push for solutions such as implementing the policies of the C2PA (Coalition for Content Provenance and Authenticity), which is an organization that provides an open technical standard for publishers, creators, and consumers to establish the origin and edits of digital content. In their words it “functions like a nutrition label for digital content”.
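
To make the “nutrition label” idea concrete, here is a deliberately simplified sketch of provenance signing in Python. This is not the actual C2PA implementation (real C2PA manifests use X.509 certificates and metadata embedded in the file itself); the key, content, and edit history below are all made up for illustration. The core idea is the same: bind a cryptographic hash of the content and its edit history to a signature, so any later tampering is detectable.

```python
import hashlib
import hmac

# Hypothetical publisher signing key; real systems use certificate-based keys.
SECRET_KEY = b"publisher-signing-key"

def make_manifest(content: bytes, edits: list[str]) -> dict:
    """Bind a content hash and its edit history to a signature."""
    digest = hashlib.sha256(content).hexdigest()
    record = digest + "|" + "|".join(edits)
    signature = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "edits": edits, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; both must match the manifest."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False  # the content was altered after signing
    record = digest + "|" + "|".join(manifest["edits"])
    expected = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"raw image bytes"
manifest = make_manifest(photo, ["captured 2025-01-01", "cropped"])
print(verify_manifest(photo, manifest))               # True: untouched content
print(verify_manifest(b"deepfaked bytes", manifest))  # False: bytes changed
```

The point of such a trail is not to prove a video is true, only to prove where it came from and what was done to it, which is exactly the deniability problem described above.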

Publicly acknowledging these problems will also breed awareness of and focus on the problems related to AI. There also have to be social consequences for bad actions. We have to make them socially unacceptable. Ostracize those who behave terribly, as humans have done since time immemorial. Defund any companies, organizations (especially news and news-adjacent ones), and politicians that manipulate us using rage and false information.

We have to learn how to disengage from the sources that manufacture outrage. We do this as self-defense. If we devalue their actions, they lose their power, their influence, and most importantly their money and motivation.

If we don’t tolerate these things as a society they will fade. If we prevent bad norms from establishing then we weaken their hold on us and disincentivize people from taking such actions.

We also need to push for any military applications of AI to, at the very least, require a human in the loop for any lethal decision. Without that, we stand a greater chance of not only killing innocent people, but also starting larger conflicts unintentionally.

Policy and Regulatory Steps

The governments of the world also need to be involved. Laws need to catch up to these technological leaps. They have to be thoughtful and empowering to the people without stifling effective innovation, especially innovations from the little guys that usually take the lead on progress and improvements in the world at large. The laws cannot prop up the big companies while raising the barrier to entry so much that it prevents competition from developing and thriving.

These laws must penalize non-consensual imagery, videos, and other content. Strictly. As to discourage these things from continuing to happen. There have to be consequences for those that manipulate others. For those that abuse others. For those that would try to control broader behaviors and outcomes with falsehoods, lies, fictions, and twisting of truths and reality. And for digital impersonation.

It will be challenging for governments to regulate this without overreaching. They will likely fail outright at times and will need solid course corrections. There are no easy solutions here. Working through this will require trial and error.

Is the future brighter or bleaker?

Currently, we are in the “Wild West” phase of Artificial Intelligence. Much like e-commerce years ago, you have a hodgepodge of nefarious actors and genuine innovators and businesspeople leveraging the new technology. Eventually we will develop countermeasures. Before e-commerce, we didn’t have widespread use of SSL and digital security. The next step is digital content authenticity validation, through C2PA and other methods that will surely follow. We need digital paper trails to help us navigate all of this.

Before those solutions are fully fleshed out, we will go through a digital dark age. We see this already as scam sophistication ratchets up and “truth decay,” a further erosion of shared belief in what is true, is used to manipulate and control us. It will continue to make elections and politics harder to navigate. These tools are essentially giving a megaphone to every bully and a multitool to every fraudster.

The next big shift will be back to the real world. People are already starting to value the in-person experience once more. To be in the presence of others and have face-to-face interactions. This strengthens our relationships and protects us with real-world community, where “deepfakes” hold much less sway and don’t exist beyond the ether. We go back to building trust with a handshake or a hug, not a screen.

AI isn’t a new species. It’s a force multiplier for ours. The future won’t be decided by machines becoming evil, but by whether humans choose to be better than they’ve been before.
