AI government regulation: why and how | Kaspersky official blog

Credit to Author: Eugene Kaspersky | Date: Thu, 18 May 2023 13:22:44 +0000

I’m a bit tired by now of all the AI news, but I guess I’ll have to put up with it a bit longer, for it’s sure to continue to be talked about non-stop for at least another year or two. Not that AI will then stop developing, of course; it’s just that journalists, bloggers, TikTokers, Tweeters and other talking heads out there will eventually tire of the topic. But for now their zeal is fueled not only by the tech giants, but governments as well: the UK’s planning on introducing three-way AI regulation; China’s put draft AI legislation up for public debate; the U.S. is calling for “algorithmic accountability”; the EU is discussing, but has not yet passed, draft laws on AI, and so on and so forth. Lots of plans for the future, but, to date, the creation and use of AI systems haven’t been limited in any way whatsoever; however, it looks like that’s going to change soon.

A plainly debatable matter is, of course, the following: do we need government regulation of AI at all? And if we do, why, and what should it look like?

What to regulate

What is artificial intelligence? (No) thanks to marketing departments, the term’s been used for lots of things: from cutting-edge generative models like GPT-4 to the simplest machine-learning systems, including some that have been around for decades. Remember T9 on push-button cellphones? Heard about automatic spam and malicious-file classification? Do you check out film recommendations on Netflix? All of those familiar technologies are based on machine-learning (ML) algorithms, aka “AI”.
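
To make the point concrete, here’s a minimal, purely illustrative sketch (not Kaspersky’s actual technology) of the kind of “AI” behind a basic spam filter, assuming scikit-learn and a made-up toy dataset:

```python
# A toy spam classifier: bag-of-words features plus naive Bayes,
# i.e., decades-old machine learning that marketing would happily call "AI".
# Purely illustrative; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = spam, 0 = legitimate.
texts = [
    "WIN a FREE prize now, click here",
    "Cheap pills, limited offer!!!",
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Click here to claim your free prize"]))  # expected: [1] (spam)
```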

Here at Kaspersky, we’ve been using such technologies in our products for close to 20 years, always preferring to modestly refer to them as “machine learning”, if only because “artificial intelligence” calls to most everyone’s mind things like talking supercomputers on spaceships and other stuff straight out of science fiction. However, such talking, thinking computers and droids would need to be fully capable of human-like thought; that is, to possess artificial general intelligence (AGI) or artificial superintelligence (ASI). Neither AGI nor ASI has been invented yet, and they’re unlikely to appear in the foreseeable future.

Anyway, if all types of AI are measured with the same yardstick and regulated wholesale, the whole IT industry and many related ones aren’t going to fare well at all. For example, if we (Kaspersky) are ever required to get consent from all our training-set “authors”, we, as an information security company, will find ourselves up against the wall. We learn from malware and spam, and feed the knowledge gained into our machine learning, while their authors tend to prefer to withhold their contact details (who knew?!). Moreover, considering that data has been collected and our algorithms have been trained for nearly 20 years now, quite how far into the past would we be expected to go?

Therefore, it’s essential for lawmakers to listen not to marketing folks but to machine-learning/AI industry experts, and to discuss potential regulation in a specific, focused manner: for example, possibly regulating only multi-function systems trained on large volumes of open data, or decision-making systems that carry high levels of responsibility and risk.

And as new AI applications arise, regulations will need frequent revision to keep up with them.

Why regulate?

To be honest, I don’t believe in a superintelligence-assisted Judgement Day within the next hundred years. But I do believe in a whole bunch of headaches from thoughtless use of the computer black box.

As a reminder to those who haven’t read our articles on both the splendor and misery of machine learning, there are three main issues regarding any AI:

  • It’s not clear just how good the training data used for it were/are.
  • It’s not clear at all what AI has succeeded in “comprehending” out of that stock of data, or how it makes its decisions.
  • And most importantly — the algorithm can be misused by its developers and its users alike.

Thus, anything at all could happen: from malicious misuse of AI to unthinking compliance with AI decisions. Graphic real-life examples: fatal autopilot errors, deepfakes (1, 2, 3) by now habitual in memes and even the news, a silly error in school-teacher contracting, the police apprehending the wrong person for shoplifting, and a misogynistic AI recruiting tool. Besides, any AI can be attacked with custom-made hostile data samples (adversarial examples): vehicles can be tricked using stickers, personal information can be extracted from GPT-3, and antivirus or EDR can be deceived too. And by the way, attacks on combat-drone AI described in science fiction don’t appear all that far-fetched any more.
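
To show what such hostile (adversarial) data looks like in practice, here’s a minimal, hypothetical sketch, assuming PyTorch and a toy untrained linear classifier rather than any real product. It uses the well-known Fast Gradient Sign Method: nudge the input in the direction that most increases the model’s loss.

```python
# Minimal Fast Gradient Sign Method (FGSM) sketch on a toy, untrained model.
# Purely illustrative; real attacks target real, trained models, but the
# mechanics are the same.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Linear(28 * 28, 10)     # toy 10-class "image" classifier
x = torch.rand(1, 28 * 28)               # a toy input with values in [0, 1]
with torch.no_grad():
    label = model(x).argmax(dim=1)       # treat the current prediction as the "correct" class

# One FGSM step: move every input value by epsilon in the direction that raises the loss.
x.requires_grad_(True)
loss = F.cross_entropy(model(x), label)
loss.backward()
epsilon = 0.1                            # perturbation budget (barely visible on real images)
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("prediction before:", label.item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())  # may change here; often flips
                                                                # confident predictions of real,
                                                                # undefended models
```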

In a nutshell, the use of AI hasn’t given rise to any truly massive problems yet, but there is clearly a lot of potential for them. Therefore, the priorities of regulation should be clear:

  1. Preventing critical infrastructure incidents (factories/ships/power transmission lines/nuclear power plants).
  2. Minimizing physical threats (driverless vehicles, misdiagnosing illnesses).
  3. Minimizing personal damage and business risks (arrests or hirings based on skull measurements, miscalculation of demand/procurements, and so on).

The objective of regulation should be to compel AI vendors and users to take care not to increase the risks of the negative outcomes mentioned above. And the more serious the risk, the more forcefully that care should be compelled.

There’s another concern often aired regarding AI: the need to observe moral and ethical norms, and to cater to people’s psychological comfort, so to speak. To this end, we see warnings given so folks know they’re viewing a non-existent (AI-drawn) object or communicating with a robot and not a human, as well as notices informing that copyright was respected during AI training, and so on. And why? So lawmakers and AI vendors aren’t targeted by angry mobs! And this is a very real concern in some parts of the world (recall the protests against Uber, for instance).

How to regulate

The simplest way to regulate AI would be to prohibit everything, but it looks like this approach isn’t on the table yet. And anyway, it’s not much easier to prohibit AI than it is to prohibit computers. Therefore, all reasonable regulation attempts should follow the principle of “the greater the risk, the stricter the requirements”.

Machine-learning models used for something rather trivial, like retail buyer recommendations, can go unregulated; but the more sophisticated the model, or the more sensitive the application area, the stricter the requirements on system vendors and users can be. For example:

  • Submitting a model’s code or training dataset for inspection to regulators or experts.
  • Proving the robustness of a training dataset, including in terms of bias, copyright and so forth (a minimal bias check of this kind is sketched after this list).
  • Proving the reasonableness of the AI “output”; for example, that it’s free of hallucinations.
  • Labelling AI operations and results.
  • Updating a model and training dataset; for example, screening out folks of a given skin color from the source data, or suppressing chemical formulas for explosives in the model’s output.
  • Testing AI for “hostile data”, and updating its behavior as necessary.
  • Controlling who’s using specific AI and why. Denying specific types of use.
  • Training large AI models, or models aimed at a particular application area, only with the permission of the regulator.
  • Proving that it’s safe to use AI to address a particular problem. This approach is very exotic for IT, but more than familiar to, for example, pharmaceutical companies, aircraft manufacturers and many other industries where safety is paramount. First would come five years of thorough tests, then the regulator’s permission, and only then could a product be released for general use.
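
As flagged in the dataset-robustness item above, here’s a minimal, hypothetical sketch of one such check, assuming pandas and an entirely made-up toy hiring dataset: compare the rate of positive labels across groups to spot the kind of historical bias a model trained on this data would simply reproduce.

```python
# Toy training-dataset bias check; purely illustrative, made-up data.
import pandas as pd

# Hypothetical historical hiring decisions a model would learn from.
data = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "m", "f", "m", "m"],
    "hired":  [0,   1,   1,   1,   1,   0,   1,   0],
})

# Positive-outcome rate per group; a large gap is a red flag
# (compare the misogynistic recruiting tool mentioned earlier).
rates = data.groupby("gender")["hired"].mean()
print(rates)
print("disparity ratio:", round(rates.min() / rates.max(), 2))  # 1.0 would mean parity
```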

The last measure on the list (pre-market approval) appears excessively strict, but only until you learn about incidents in which AI messed up treatment priorities for acute asthma and pneumonia patients and tried to send them home instead of to an intensive care unit.

The enforcement measures may range from fines for violations of AI rules (along the lines of European penalties for GDPR violations) to licensing of AI-related activities and criminal sanctions for breaches of legislation (as proposed in China).

But what’s the right way?

What follows are my own personal opinions, but they’re based on 30 years of actively pursuing advanced technological development in the cybersecurity industry: from machine learning to “secure-by-design” systems.

First, we do need regulation. Without it, AI will end up resembling highways without traffic rules. Or, more relevantly, resembling the online collection of personal data in the late 2000s, when nearly everyone collected everything they could lay their hands on. Above all, regulation promotes self-discipline among market players.

Second, we need to maximize international harmonization and cooperation in regulation — the same way as with technical standards in mobile communications, the internet and so on. Sounds utopian given the modern geopolitical reality, but that doesn’t make it any less desirable.

Third, regulation shouldn’t be too strict: it would be short-sighted to strangle a dynamic young industry like this one with overregulation. That said, we need a mechanism for frequently revising the rules to stay abreast of technology and market developments.

Fourth, the rules, risk levels and levels of protective measures should be defined in consultation with a great many experts who have relevant experience.

Fifth, we don’t have to wait ten years. I’ve been banging on about the serious risks inherent in the Internet of Things and about vulnerabilities in industrial equipment for over a decade already, while documents like the EU Cyber Resilience Act first appeared (as drafts!) only last year.

But that’s all for now, folks! And well done to those of you who’ve read this to the end: thank you all! And here’s to an interesting, safe, AI-enhanced future!
