By decision of the Supreme Council for Science and Technological Development of the Academy of Sciences of Moldova and the National Council for Accreditation and Attestation, the journal "Law and Politology" («Право и политология») has been recognized as a category "B" scientific publication in the field of law and political science (decision No. 151 of 21 July 2014).

 

The journal "Law and Politology" is an international publication of its scientific partners:

 

INSTITUTE OF LEGAL AND POLITICAL RESEARCH

OF THE ACADEMY OF SCIENCES OF MOLDOVA

GELATI ACADEMY OF SCIENCES (GEORGIA)

HIGHER SCHOOL OF SECURITY AND ECONOMICS (BULGARIA)



Zahra MUSAYEVA (Azerbaijan), 
 researcher in international relations
 
PROBLEMS OF LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE AND ITS IMPLEMENTATION IN GOVERNANCE
 
Artificial intelligence (AI) is a field of computer science focused on developing systems capable of performing tasks that typically require human intelligence, such as perception, decision-making, problem-solving, language processing, and learning. Unlike traditional programs that operate according to predefined algorithms, AI systems can adapt, improve, and optimize their actions based on accumulated experience. Depending on the level of complexity and autonomy, AI can be classified into several types, including narrow (or weak) AI, which tackles specific tasks, and general AI, which is capable of performing a broad range of intellectual functions similar to human abilities.
The development of AI is closely linked to advancements in machine learning and neural network technologies, which enable systems to analyze large volumes of data and uncover hidden patterns. This capability allows for more efficient problem-solving across various fields, from healthcare and education to finance and public administration. In recent years, particular attention has been given to the field of deep learning, which enables AI systems to autonomously improve their capabilities. This represents a significant step forward compared to previous programming methods.
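The contrast drawn above between programs that follow predefined algorithms and systems that learn their behavior from data can be made concrete with a minimal sketch. The following illustrative example (not drawn from the article; all data and names are hypothetical) fits a tiny logistic-regression classifier in pure Python: the decision rule is not written by the programmer but adjusted iteratively from example data, which is the essence of machine learning.

```python
# Illustrative sketch only: a decision rule learned from data rather
# than hand-coded. Pure-Python logistic regression via gradient descent.
import math

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit weights w and bias b so sigmoid(w.x + b) approximates the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                       # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy dataset: the (hidden) pattern is "label 1 when the features sum above 1".
X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.8, 0.9], [0.2, 0.1], [0.7, 0.6]]
y = [0, 1, 0, 1, 0, 1]
w, b = train(X, y)
preds = [predict(w, b, x) for x in X]
print(preds)
```

Nothing in the code states the rule "features summing above 1"; the model uncovers that pattern from the examples, which is precisely what distinguishes such systems, at scale, from conventionally programmed software.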
The ethical and philosophical aspects of AI usage are becoming increasingly important, as the advancement of technology raises questions related to rights and moral norms. For instance, how can we ensure that decisions made by AI are fair, ethical, and do not violate human rights? In this regard, researchers and experts are actively discussing the need to establish principles of ethical AI, which should address not only technological but also humanistic concerns, such as privacy protection, non-discrimination, and the upholding of justice [1].
Furthermore, the development of AI presents new challenges for society in the context of human-machine relationships. AI systems, such as autonomous vehicles or healthcare prediction systems, are capable of making decisions that can impact people's lives. In this context, the question arises of who is responsible for potential errors made by AI and what legal mechanisms need to be developed to ensure adequate legal protection for citizens and prevent potential threats.
Thus, a theoretical understanding of AI is impossible without considering both its technological components and the challenges it poses to existing legal, ethical, and social norms.
One of the main challenges in the legal regulation of artificial intelligence is the lack of unified international standards, which creates a legal vacuum. To date, many countries have developed their own approaches to regulating AI, but these approaches vary significantly in terms of their rigor and focus. In some countries, there is a strong emphasis on the ethical aspects of AI usage, while others focus on security and data protection. This uneven regulation creates difficulties in cross-border operations and international cooperation, particularly in the context of a globalized world where data and technologies freely move between countries [2].
Another key issue is the question of responsibility for the actions of AI. In cases where artificial intelligence makes decisions that lead to harm (for example, in accidents involving autonomous vehicles or errors in medical diagnostics), a complex question arises: who is responsible for these actions? Is it the developer of the system, the user, or the AI itself, if it possesses elements of autonomy? Currently, legal systems are largely unprepared to provide adequate answers to these questions, leading to legal uncertainty and difficulties in judicial practice.
Special attention must be given to the protection of human rights and privacy in the era of AI. The deployment of AI technologies across sectors such as healthcare, education, and law enforcement may infringe upon citizens' fundamental rights, including the right to privacy and the protection of personal data. Issues related to the processing of the vast amounts of personal information collected by AI systems are especially pertinent. The use of data without the proper consent of data subjects, or for the purpose of manipulating public opinion, threatens fundamental human rights principles and calls for the development of new legal tools to protect these rights [3].
Furthermore, the development of AI may impact democratic processes and legal systems. For example, the use of AI in election campaigns or voting systems can raise concerns about the manipulation of public opinion or even the risk of election interference. AI can be used to create "deepfakes," which may undermine political stability and citizens' trust in institutions. These risks necessitate the development of effective legal oversight mechanisms to ensure transparency and security in such processes.
Finally, an important aspect is the regulation of AI system security. AI systems, especially those managing critical infrastructure, must be protected from external threats such as cyberattacks. However, there is also the risk that AI systems themselves could be used for criminal purposes, such as creating new types of weapons or committing cybercrimes. All of this calls for the creation of new norms and standards aimed at ensuring the security of AI and preventing its use for illegal purposes.
Thus, the issues of legal regulation of AI are multifaceted and require a comprehensive approach that takes into account not only technological but also socio-economic and political factors.
The integration of artificial intelligence into governance represents one of the most promising yet risky areas, as it impacts not only the efficiency of governmental and administrative processes but also the very foundations of democratic institutions. AI can significantly improve the operations of public authorities by enhancing the speed and accuracy of decision-making. For example, the use of AI in tax administration can help more effectively identify and prevent fraud, while in healthcare, it can accelerate diagnostics and optimize the allocation of medical resources. At the same time, the implementation of AI in governance presents new legal challenges, including issues of security, transparency, and accountability [4].
One of the advantages of using AI in governance is the ability to create more efficient management systems capable of analyzing large volumes of data and making decisions based on objective criteria, thereby minimizing the impact of human subjectivity and errors. AI systems can significantly enhance the speed and accuracy of information processing, improve forecasting, and provide better analysis of the current situation, which is especially important in areas such as crime prevention, emergency management, and social policy. This can lead to a more balanced and efficient allocation of resources, improved quality of public services, and increased public trust in governmental institutions.
However, in practice, the implementation of AI in government structures is accompanied by a number of problems related to the legal and ethical aspects of this process. First and foremost, these are issues of accountability and transparency in decision-making. Autonomous systems that make decisions based on algorithms and data may operate without sufficient human oversight, creating risks of abuse and injustice. For example, the use of AI in the justice system or law enforcement could lead to erroneous decisions if the algorithms are biased or poorly trained. Without adequate legal oversight and monitoring, such systems could make decisions that violate citizens' rights or fail to adhere to the principles of fairness and equality before the law [5].
The implementation of AI also carries the risk of citizens losing control over decision-making processes. The use of AI in government institutions could lead to excessive automation of processes, where crucial decisions are made not by humans, but by machines. This threatens democratic principles such as transparency and citizen participation in governance. If AI begins to replace key decisions, such as budget allocation or appointments to high government positions, the question arises: who is responsible for such decisions? It is important that artificial intelligence remains a tool, not an independent decision-making entity acting without proper oversight.
Moreover, the implementation of AI in governance requires a clear legal framework that ensures data security and protection of citizens' rights. With the widespread use of AI, new risks emerge for personal information as well as for national security. It is crucial that legal systems and agencies responsible for data protection are prepared for new challenges related to the processing of large volumes of data, ensuring their protection against leaks or manipulation, and preventing the potential use of AI for surveillance of citizens without their consent [6].
Thus, the implementation of artificial intelligence in governance requires a balanced approach that takes into account both technological capabilities and legal and ethical aspects. It is essential to develop clear regulations governing the use of AI in public institutions to ensure its application in the best interests of society while protecting citizens' rights.
The future of legal regulation of artificial intelligence lies in creating a flexible and adaptive legal environment capable of responding swiftly to new challenges arising from technological advancements. Ideally, legal systems should address all aspects of AI usage, from protecting individual rights to ensuring security and ethical decision-making, in order to strike a balance between innovation and the protection of human rights. One of the key directions for the future will be the development of international standards that provide a unified legal framework for regulating AI at a global level. This may involve the creation of universal rules and principles that will be applied across all countries, helping to avoid discrepancies in regulatory approaches [7].
Particular attention must be given to the creation of "AI ethics" principles, which will serve as the foundation for regulating the use of technologies across various areas of life. These principles should include commitments to privacy protection, prevention of discrimination, ensuring algorithmic transparency, and mandatory oversight of AI actions. Such rules can be developed both at the national level and through international organizations such as the UN or the EU, with the involvement of the scientific community, business representatives, and human rights advocates. The primary objective will be to ensure that AI is used for the benefit of society, without infringing on citizens' rights or violating fundamental norms.
On the other hand, an important area is the creation of mechanisms for accountability for the actions of AI. Developing legal norms that regulate responsibility for damages caused by AI will require the establishment of new forms of legal liability that take into account both the capabilities of the technologies and the specificities of their application. This will necessitate amendments to existing laws on civil and criminal liability, as well as the development of specific norms for cases where AI actions lead to unpredictable consequences or violations of citizens' rights. It is crucial that these norms protect citizens from potential threats while not restricting innovation and technological development [8].
Furthermore, for effective regulation of AI, it is essential to strengthen international cooperation and coordination in the development of security and data protection standards. In the context of globalization, where AI systems and data cross borders and the differences in legal approaches between countries may create gaps in the protection of citizens' rights, it is crucial to ensure collaboration between states and international organizations. This may involve the exchange of best practices, the development of common principles in data protection and IT infrastructure security, as well as the creation of specialized international bodies responsible for monitoring and regulating AI-related issues.
Finally, an important part of legal regulation will be the development of educational programs aimed at training lawyers, information technology specialists, and other professionals who will be involved in the legal support of AI-related projects. This will not only increase the level of knowledge in the fields of technology and legal norms but also foster a culture of responsible and ethical AI usage in business and public administration [9].
Overall, the future of AI legal regulation requires a comprehensive approach that combines innovative mechanisms, accountability for the use of technologies, and respect for human rights. Only in this way can we ensure that AI will be used to create a safe and just society.
 
Bibliography:
1. Binns, R. On the Ethics of Artificial Intelligence: A Systematic Review of the Literature. Philosophy & Technology, 2018, 31(4), pp. 487-517.
2. Bryson, J. J., & Winfield, A. F. Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 2017, 50(10), pp. 116-119.
3. Goodall, N. J. Machine Ethics and Autonomous Vehicles. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, 2014, pp. 3-10.
4. Lin, P., Abney, K., & Bekey, G. A. Autonomics and Ethics: From Principles to Practice. Springer, 2017.
5. Russell, S., & Norvig, P. Artificial Intelligence: A Modern Approach (3rd ed.). Pearson, 2016.
6. Smith, B. W. The Future of Artificial Intelligence Regulation: A Comparative Analysis. Harvard Law Review, 2018, 131(2), pp. 255-289.
7. Veldman, J., & Dignum, V. Ethical Artificial Intelligence: Design and Implementation of Autonomous Systems. Springer, 2020.
8. Walden, I., & Murdoch, C. Legal Frameworks for the Regulation of Artificial Intelligence: Challenges and Solutions. Oxford University Press, 2020.
9. Winfield, A. F. T., & Jirotka, M. Ethical Governance is Key to Ensuring the Safe and Fair Use of AI. Nature, 2018, 562(7726), pp. 2-4.

 
 
 
Zahra MUSAYEVA (Azerbaijan), 
 researcher in international relations
 
PROBLEMS OF LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE AND ITS IMPLEMENTATION IN GOVERNANCE
 
Summary. The relevance of the topic of legal regulation of artificial intelligence (AI) and its implementation in governance becomes evident in the context of rapid technological progress and the increasing penetration of AI into various spheres of public life. Every year, AI systems become more complex, autonomous, and capable of making decisions that directly impact the rights and freedoms of citizens. These processes pose significant challenges for legal systems in the modern era: how to properly regulate technologies that have the potential to alter the economic, social, and political landscape? How can human rights and civil liberties be ensured in an era of ubiquitous AI usage?
The aim of this article is to examine key issues in the legal regulation of AI and identify legal and ethical challenges associated with its integration into government institutions. The study will particularly focus on analyzing existing legal gaps, questions of responsibility for AI actions, and practical examples of AI usage in public structures. It is important to note that given the lack of universal international standards for AI regulation, the topic is not only legal but also political, as it requires cooperation between states and various stakeholders at the level of international organizations. 
 
 

  • LAW AND POLITOLOGY
    International scientific journal
    Website: www.law-politology.az
    Email: [email protected]