Law on artificial intelligence: rigid framework for neural networks
Recently, the European Parliament took up the regulation of artificial intelligence. The draft law on artificial intelligence (Artificial Intelligence Act, AI Act) was adopted by an overwhelming majority and, according to Roberta Metsola, President of the European Parliament, should "set the global standard" for years to come.
Around the same time, the European Commission brought antitrust charges against Google, which is seen as another major step toward regulating the tech industry.
What is the value of the AI Act?
The main goal of the Artificial Intelligence Act is to create a safe environment for the use and development of artificial intelligence, one that guarantees users the protection of their rights. According to parliamentarians, a key condition for this is imposing restrictions on AI systems that can influence people's subconscious.
Some AI programs may be banned outright if the risk of their use is deemed "unacceptable". For "high-risk" technologies, legislators plan to introduce significant restrictions and establish new transparency requirements for such systems.
The law will come into force no earlier than 2025. The final version has not yet been released, but since the draft text has been made public, it is already possible to analyze what the proposed rules on the use of AI have to offer.
Restrictions on emotion recognition
Legislators plan to ban the use of emotion-recognition software during inspections (by police, border guards, migration services, etc.), as well as AI tools that track the emotional state of company employees, students, and pupils (in the workplace, in educational institutions, etc.).
Developers of emotion-recognition AI, of course, do not like this prospect. They insist on the benefits of their software, arguing, for example, that a driver's facial expression can reveal in time that he is falling asleep at the wheel. In the same way, such tools could identify pupils and students who are struggling to learn.
Limiting real-time biometric tracking of citizens
Experts predict that this ban will provoke the fiercest debate among legislators. And it is clear why: in France, for example, lobbyists are pushing to expand the use of digital facial recognition tools.
Law enforcement representatives of EU member states oppose a complete ban on remote biometric identification. They argue that such technology facilitates the search for missing children, the identification of criminals, and the prevention of terrorist threats.
The problem is indeed multifaceted, so it is important that the choice of AI systems and their use be strictly regulated.
Restrictions on tracking social behavior
Restrictions are planned on the use of social scoring of citizens to build a generalized data portrait of them. Today, information about social behavior is actively used by various institutions when deciding whether to issue mortgage loans (banks), hire employees (employers), or set insurance rates (insurance companies).
Europeans want to shield themselves as much as possible from the bitter experience of countries with authoritarian regimes. A striking example is China, with its infamous social rating system that enables total control of individuals through the collection and processing of their personal data. Citizens with a low rating, for example, find it difficult to get a good job, obtain a loan, or even buy transport tickets.
Restrictions on the use of copyrighted material
Large language models (such as OpenAI's GPT-4) are likely to be banned from collecting copyrighted data. The AI research lab OpenAI has already come under the spotlight of European legislators over intellectual property infringements.
The new law also speaks of the need to label copyrighted content, with the labeling preserved even after the content is modified. Tech industry lobbyists have many objections to the proposed rules and are trying to resist such a ban.
Limitation of social media recommendation algorithms
Modern algorithms that compile lists of recommended content are often criticized by human rights activists. And there are reasons for this: leaks of users' personal data through interactions with certain advertising accounts have been recorded repeatedly.
If the new rules are adopted, recommendation systems on social media platforms will face greater scrutiny, and tech companies will bear more responsibility for the impact of user-generated content.
Conclusion
The development of AI technologies carries risks of spreading disinformation, increasing the vulnerability of digital systems, social manipulation, and mass surveillance. The AI Act draws attention to the need to regulate neural network tools in order to prevent the possible negative consequences of using artificial intelligence technologies.