Category Archives: Risks

Source: A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning | WIRED

Source: AGI Ruin: A List of Lethalities — LessWrong

Source: The Only Way to Deal With the Threat From AI? Shut It Down | TIME

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.

Nick Bostrom

Source: ‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute | Technology | The Guardian

Source: Inside Ukraine’s Killer-Drone Startup Industry | WIRED

Source: Ukraine Rolls Out Target-Seeking Terminator Drones

“I am not stupid, you know,” says Sarah Connor in the original 1984 Terminator movie, refusing to believe that a killer robot is after her. “They cannot make things like that yet.”

“Not yet,” Kyle Reese tells her. “Not for about forty years.”

Forty years is now up. Not only do autonomous weapons exist, they can be built at a kitchen table with components bought online. And as Sternenko suggests, they are rapidly improving. It will take some time for the world to fully come to terms with what this means.

My thoughts: target discrimination, IFF (identification friend or foe), how far will this go? Ukraine is desperate; how far will they go?

Source: Ukrainian Wild Hornets Co-Founder Talks About The Future Of Drone Wars

Need to look more into their “AI targeting”. What will prevent drones from being turned around or mis-deployed and automatically attacking friendly targets? See the other article.

Source: Drone Warfare’s Terrifying AI-Enabled Next Step Is Imminent

Source: https://www.businessinsider.com/meta-headset-inception-attacks-trap-users-fake-environment-study-2024-3

Source: Superfluous people vs AI: what the jobs revolution might look like