Artificial Intelligence

Hector Santana
4 min read · May 3, 2024


Hey, What Can Go Wrong?

The development of AI soldiers is being pursued by nations around the world.

Robots that write letters for you, prepare legal briefs, or build your cars and computers: Artificial Intelligence (AI) is popping up everywhere. Yes, I saw one in the supermarket asking folks if they needed help. Many believe the technology will bring down the cost of doing things and result in a more efficient world. That may be accurate, but there are a lot of unanswered questions.

Imagine robots writing computer code and solving complex scientific problems. Imagine AI being used to monitor defense systems and satellites, or to run our 911 call centers. In a world of convenience, we may be convinced that AI will improve our lives. But what you're not imagining is how every advancement in AI will lead to more reliance on those types of systems. Eventually, nothing will be off limits.

Artificial intelligence soldiers, drones, ships, and aircraft are only a matter of time. The US Department of Defense and governments all over the world are already funding programs meant to create artificial intelligence hardware designed for war. This technology will save lives, they will say, and many of us will buy into it. Just look at the use of war drones. The advancement of drones has already taken warfare to new levels of complexity. Now even the most ill-equipped guerrilla force can destroy targets using commercial drone technology. Just think of what they will be able to do with an AI-equipped drone, one that knows more about targeting, weather conditions, and global positioning. It is a dangerous proposition.

China is a leading proponent of artificial intelligence, attracting half of the world's AI companies to its shores.

What can go wrong, you ask? Many generative artificial intelligence applications have experienced a phenomenon known as hallucination: the system manufactures some of its conclusions, in effect creating a lie. OpenAI's ChatGPT ran into this anomaly soon after its 2022 launch, when its confidently wrong answers made international headlines. In one case, a lawyer used ChatGPT to prepare a brief for a court case in Manhattan, and the application made up legal cases. He submitted the brief to the court, only to be skewered when it was learned that the cases cited never existed. Hallucination had caused the chatbot to fabricate its responses, and its programmers scrambled to keep it from asserting things it could not support.

However, OpenAI is not alone in experiencing this phenomenon. Google's Bard made an erroneous assertion about the James Webb Space Telescope in its own debut demo, and Microsoft's Bing chatbot provided false narratives and information in response to questions on a range of topics. Hallucination is a byproduct of AI systems that becomes a clear threat when you consider applying AI to industries like defense and aerospace. Remember the robot writing code? What will happen if that robot hallucinates and writes its own instructions, bypassing what humans have programmed it to do? The thought is frightening.

How do we deal with this emerging technology? Vectara, a company intent on tracking chatbot lies, is on the case, revealing hallucinations when it finds them and researching why they happen. But is tracking lies or misinformation the solution? Many do not think so, because the lurking dangers remain the same and there is nothing to discourage unscrupulous developers from building something insidiously dangerous. Tracking lies only reveals that the technology is flawed, and that is enough for most people to be turned off by the unknown prospects of AI.
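
To make the tracking idea concrete, here is a minimal sketch of one approach researchers use to flag hallucinations: ask the model the same question several times and distrust answers it cannot reproduce, since fabricated details tend to vary between runs. Everything here is illustrative; the generate() function is a toy stand-in for a real chatbot API, the 0.6 agreement threshold is an arbitrary choice, and this is not a claim about Vectara's actual method.

```python
import random
from collections import Counter

# Toy stand-in for a real chatbot API call. A real checker would query
# an actual model; this fake one occasionally "hallucinates" a detail.
CANNED_ANSWERS = [
    "the telescope launched in december 2021",
    "the telescope launched in december 2021",
    "the telescope launched in march 1998",  # the fabricated answer
]

def generate(prompt: str) -> str:
    """Hypothetical model call: returns one sampled answer."""
    return random.choice(CANNED_ANSWERS)

def looks_grounded(prompt: str, samples: int = 5, threshold: float = 0.6) -> bool:
    """Sample the same prompt repeatedly; if the most common answer
    falls below the agreement threshold, flag a possible hallucination."""
    answers = [generate(prompt).strip().lower() for _ in range(samples)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / samples >= threshold

if __name__ == "__main__":
    question = "When did the James Webb Space Telescope launch?"
    print("consistent" if looks_grounded(question) else "possible hallucination")
```

A consistency check like this catches only one kind of lie. A model that fabricates the same detail every single time sails right through, which is part of why many doubt that tracking alone is the solution.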

Many industries support the use and expansion of artificial intelligence, and they are driving its development.

To date, there has been little effort to curb our appetite for AI. Congress has barely addressed the topic, yet it did authorize the use of AI in its legislative chambers, though only for writing, research, and data collection. Yes, they put guardrails on its use, but the fact that they authorized it in the first place sets a terrible precedent.

The optics look bad, particularly when they say they are concerned about bad actors using AI for unscrupulous ends. What can you do? Tell your legislators to restrict the use of AI until we know more about how to stop hallucination in these machines. Tell them to regulate its use strictly and to ban AI from law enforcement and from the seats of power in the US. AI can be useful, but its widespread application in areas like defense, satellite technology, and missile defense presents clear challenges.

Remember this good robot? It's not what the Defense Department is looking for in AI.

This technology must be guarded until we know more about it and can trust its use in a wide range of applications. Unfortunately, we must fight now to prevent what can happen later. Limiting its use today may be the smartest or the dumbest thing we do, depending on who you ask. However, what we do know is that the technology is not ready for prime time. Better to wait until it is…


Written by Hector Santana

Top Writer in Camping and Survival. I love to write about the great outdoors, survival, and politics. An avid outdoorsman and part-time survival instructor.
