Overview
As artificial intelligence (AI) solutions become integrated into ever more areas of our lives, it is essential to examine their underlying processes from a scientific perspective. A deeper understanding of the mechanisms behind AI, especially large language models (LLMs), is critical for addressing their limitations, such as bias, lack of robustness, and poor adaptability in dynamic environments. By adopting rigorous scientific methods, we can not only improve the accuracy and reliability of AI models but also ensure that they are consistent with ethical standards and societal values. This research is particularly important for developing AI systems that, in the near future, aim to be not only powerful but also reliable, fair, and accountable.
The current generation of AI shows remarkable advances in natural language processing and human-computer interaction. LLMs have demonstrated unprecedented capabilities in understanding and generating human-like text, leading to widespread adoption across fields ranging from automated query-response systems to sophisticated research tools. Despite these achievements, significant challenges remain. LLMs can inherit and even amplify social biases embedded in their training data, leading to distorted, unethical, or harmful outcomes such as racist remarks, misinformation, or the reinforcement of stereotypes. Furthermore, the real-time adaptability of these models, their ability to function effectively in multi-agent systems, and their alignment with human preferences remain open areas of active research.
To overcome these challenges and fully realize the potential of AI, a systematic and scientific approach is needed. This includes the detailed study and refinement of AI models, the development of robust and diverse datasets, and the application of advanced fine-tuning and optimization techniques. A deep scientific understanding of AI's underlying processes can lead to models that are more resilient, adaptive, and capable of operating in complex, dynamic environments. It also opens up opportunities for innovative methods of assessing and mitigating bias, ensuring that AI systems are not only fair and inclusive but also able to detect and counter racism, misinformation, and other forms of harmful content.
Team
- Prof. Dr. Tatiana Atanasova, IICT-BAS (Coordinator of the Research Team)
- Prof. Dr. Desislava Ivanova, Technical University of Sofia
- Assoc. Prof. Dr. Kristina Dineva, IICT-BAS
- Dr. Plamen Petrov, IICT-BAS
- Dr. Viktor Danev, IICT-BAS
- Velizar Varbanov, IICT-BAS
- Kalin Kopanov, IICT-BAS
Work Packages & Roadmap
Details will be added soon.
Publications & Results
Details will be added soon.
Resources
Details will be added soon.