Author, Cybersecurity for Dummies
Cybersecurity is a hot topic given the sustained increase in cyberattacks and the ongoing arms race between bad actors and the “good guys.” Are there any emerging issues that deserve discussion but aren’t getting mainstream attention?
One issue that is not receiving enough attention is that cyberattacks are evolving from attacks performed by humans against other humans, using technology, to computers attacking computers with little human involvement. Another important AI and cybersecurity concern is that AI systems can be undermined by feeding them bad data from which to learn – the systems themselves do not need to be compromised in order to render them impotent, or worse.
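The data-poisoning point can be made concrete with a toy sketch. The function names and data below are hypothetical, purely for illustration: a trivial classifier learns a decision threshold from labeled examples, and an attacker who injects a single mislabeled outlier into the training data degrades the model – without ever touching the model's code or the deployed system.

```python
def train_threshold(samples):
    """Learn a 1-D threshold: the midpoint between the two class means."""
    neg = [x for x, label in samples if label == 0]
    pos = [x for x, label in samples if label == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def accuracy(threshold, samples):
    """Fraction of samples correctly classified by the threshold."""
    return sum((x > threshold) == (label == 1) for x, label in samples) / len(samples)

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.2, 1), (4.8, 1)]

# Poisoned copy: the attacker adds one extreme point with a false label,
# dragging the learned class-0 mean (and thus the threshold) upward.
poisoned = clean + [(20.0, 0)]

test_data = [(1.1, 0), (0.9, 0), (5.1, 1), (4.9, 1)]

clean_t = train_threshold(clean)
poisoned_t = train_threshold(poisoned)
print(accuracy(clean_t, test_data))     # → 1.0
print(accuracy(poisoned_t, test_data))  # → 0.5
```

Real-world poisoning attacks are far subtler, but the mechanism is the same: corrupt the learning data, and the system "learns" to fail.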
How can we ensure these systems and technologies remain unbiased if they are written by people and therefore inherently contain their implicit biases?
There are two separate issues:
The first is preventing the AI from making incorrect observations and decisions because it suffers from incomplete or biased data on the matters it is supposed to address. Ensuring that systems are created and tested by diverse teams, and that during learning phases systems are fed sufficiently diverse data, can help address such concerns.
The second is that AIs may make decisions that society views as inappropriately biased, but which the AI considers to be both correct and integral to achieving maximum performance. Addressing such “bias” is a complex ethical matter.
Is it enough simply to say that diverse teams can negate the biases described above?
Diversity is not just a matter of gender, race, religion, etc. – depending on the system, it may also be a matter of diversity of opinions, styles of dress, languages spoken, hair color, weight, etc. Because each person is unique in countless ways, there is no possible way to achieve a perfect level of diversity when it comes to people.
Are you at all concerned about AI making decisions that affect people? Why or why not?
Yes. There need to be checks and balances. There was a situation not long ago in which an African American man was arrested and falsely accused of a crime based on a faulty match from a facial recognition system. While identification technologies are amazing tools that can help keep criminals off the street, the decision to arrest someone based solely on the AI system’s recommendation was highly inappropriate and perhaps even illegal; such tools have to be used properly, and without undermining civil rights.
Within the broader context of cybersecurity and ethics, is there any controversy around implied and expected responses to cyberattacks powered by AI?
One of the questions that keeps surfacing is whether we should allow parties who are under attack to “hack back.” While that has been an issue in our present human-versus-human era, it will become a much more high-profile ethical and legal issue as AIs take over both offensive and defensive cybersecurity roles.