so before we get into the heavier stuff... have y'all heard the new Drake jam?! it has everything you need for summer: some Big Freedia (YOU KNOW I LOVE BOUNCE), a great beat, some female positivity... I've had it on repeat for weeks (luckily my husband loves Drake so it's all good).
OK. now let's talk geopolitics. maybe not what you'd normally expect from me on this blog. hey, what can I say? I'm multi-dimensional!
My research has always focused heavily on automation and how we as people interact with that technology. There are a lot of dimensions to the topic, but the two I focus on most are (1) how do we design automated or autonomous environments to complement and utilize the uniqueness of humanity (I'll explain more in a sec), and (2) how do we maintain safety and security while creating and integrating these systems into society?
All right... autonomous cars are cool, right? They'd probably be more efficient, more streamlined, and likely safer than the average human driver - less human error, like drunk driving. And these are good things! But in order for these machines of wondrous technological advance to operate, they'll need to communicate (likely wirelessly) with traffic signals, stop signs, and speed limit signs. And they're computers, so they will fail - and we don't really know how they'll fail (will the automation suddenly stop working and ask the passenger to jump in immediately? Or will there be some kind of slow, graceful failure? Will it always be the same?). Also, keep in mind that the same kind of person who programmed your smartphone is also programming these cars (no shade being thrown - they're human beings who err, and just as it's guaranteed that your phone will freeze or need a restart at the most inopportune time... so will your car).
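If you like code, here's what I mean by those two failure styles, as a toy Python sketch. This is purely illustrative - every class and name here is hypothetical, and real autonomous-vehicle software is vastly more complex - but it shows the difference between automation that dumps control on you instantly and automation that degrades gracefully:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    DEGRADED = "degraded"   # limited control: slowing down, warning the driver
    MANUAL = "manual"       # full control handed to the human

class ToyAutopilot:
    """Toy model contrasting two failure styles. Not a real AV stack."""

    def __init__(self, graceful: bool):
        self.graceful = graceful
        self.mode = Mode.AUTONOMOUS
        self.speed_mph = 60

    def sensor_failure(self) -> str:
        if self.graceful:
            # Graceful failure: keep limited control, shed speed, give the
            # (possibly sleeping) human time to wake up and orient.
            self.mode = Mode.DEGRADED
            self.speed_mph = max(self.speed_mph - 20, 20)
            return "WARNING: take over within 30 seconds"
        # Sudden failure: full control lands on the human right now.
        self.mode = Mode.MANUAL
        return "TAKE OVER NOW"

sudden = ToyAutopilot(graceful=False)
print(sudden.sensor_failure())    # -> TAKE OVER NOW
graceful = ToyAutopilot(graceful=True)
print(graceful.sensor_failure())  # -> WARNING: take over within 30 seconds
```

The second style buys the human precious seconds - and as we'll see below, humans need every one of them.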
So just thinking about the two things I described there: the connectivity to the infrastructure grid, and the unknown mode and rate of failure in the cars... can you imagine scenarios that are less than ideal for personal, community, and national security?
Of course you can. Just as Russians have been found sneaking around in our power infrastructure over the last few years, it's almost guaranteed that someone would hack our autonomous driving infrastructure - at least until someone creates an unhackable computer (lol). Imagine the chaos that could be caused by tricking a car into thinking a stop sign is a speed limit sign, sending your vehicle hurtling through a 4-way intersection without a care in the world. And if they can hack the stop lights, they can hack the cars too: vehicles driving around under the influence of someone with bad intentions. Would the vehicle even know it's been compromised? Not a great picture for national security.
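Here's that stop-sign trick in toy Python form. Everything here is a made-up sketch (real vehicle-to-infrastructure systems use certificate-based schemes, not a single shared key), but it shows the core idea: a car that trusts any radio broadcast can be fed a fake "speed limit" message where a stop sign should be, while a car that checks a cryptographic tag can reject it:

```python
import hmac
import hashlib
from typing import Optional

# Hypothetical shared key; real V2X security uses per-device certificates.
SHARED_KEY = b"toy-demo-key"

def sign(msg: bytes) -> bytes:
    """Compute an authentication tag for a roadside-sign message."""
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def car_reacts(msg: bytes, tag: Optional[bytes]) -> str:
    """A hardened toy car: verify the tag before trusting the message."""
    if tag is None or not hmac.compare_digest(tag, sign(msg)):
        return "ignore unauthenticated message, fall back to cameras"
    if msg.startswith(b"STOP"):
        return "brake to a stop"
    if msg.startswith(b"SPEED_LIMIT"):
        return "cruise at posted limit"
    return "ignore unknown message"

# Attacker broadcasts a bogus, unsigned speed-limit message at a 4-way stop:
print(car_reacts(b"SPEED_LIMIT 45", None))  # rejected
# The real stop sign's message, properly signed, still works:
print(car_reacts(b"STOP", sign(b"STOP")))   # brake to a stop
```

A naive car that skipped the tag check would happily sail through the intersection on the attacker's say-so - which is exactly the scenario above.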
That scenario actually CAN be challenging for some people to imagine. I sometimes get pushback: airplanes have autopilot, why can't cars?! Planes don't get hacked! Well, that's apples and oranges, and a topic for another time... but here's something a little easier to conceptualize.
Imagine you're chilling in your autonomous car, maybe reading The Economist and promptly falling asleep after the first 2 articles (love The Economist, but that shit is DENSE), when suddenly the automation in your vehicle fails... and YOU have to jump in and take over. Never mind that you were ASLEEP - even if you were wide awake and genuinely riveted by The Economist, this is a tall order. You'd have to gain awareness of the traffic around you, the condition of the car, and what you need to do to avoid an accident... There is truly no research suggesting we are capable of doing this (what we call "human take-over") in a safe amount of time and in a safe way. Sure, you could grab the wheel... but you likely wouldn't be able to make safe, informed maneuvers with any consistency in that instant.
Earlier, I mentioned the uniqueness of humanity. What does that mean? Humans are kind of dumb... we do dumb stuff all the time (hello... Tide Pods?!). However, people are AMAZING at improvising. AMAZING. We can walk into situations we've NEVER SEEN BEFORE or even imagined... and make instantaneous judgments and devise a reasonable, safe course of action to get through them.
So what does it all mean? The last thing I want you to imagine is a proliferation of autonomous weapons - like, military-grade weapons. Think about how these issues - hacking/cybersecurity and the inability of humans to get in the loop and take over in the event of an error - apply there, and I'm guessing now you're going... ahh... geopolitics. (If you want to read more about this, my friend Paul Scharre has a really good book about it that is actually NOT boring to read.)
This is a big, serious question. And not fodder for the future - like, oh, that's years away! It's not. It's here and now. If we have these weapons, or our enemies have them (which is scary)... what does that mean for global security? How do we regulate them? Do we ban them, similar to denuclearization? Do we impose design standards to ensure that human flexibility is accounted for? Does that make the automation useless? And finally, how does the development of autonomous systems (like cars and weapons) factor into American leadership? The US doesn't employ industrial policy, where leadership in the development of such systems is a national priority supported and enforced by public policy. China does. So how does that change or threaten our "power" and influence position, globally?
Sometimes, when we hear about AI or automation, it's easy to blow it off. But the reality is that it DOES impact YOUR life. AI/automation is coming - it's inevitable - but regulation and safety measures are possible. Keep an eye on the news. Press your representatives to think about this problem and how to legislate it, and when scientists warn of future issues... it's not fake news. And don't let anyone call it that.
and listen to the new Drake song!