Abstract:
Artificial Intelligence (AI) and Virtual Reality (VR) are transformative technologies that can advance the principle of substantive equality enshrined in Article 3(2) of the Italian Constitution. When responsibly integrated into educational, cultural, healthcare, and occupational contexts, they can remove structural and communicative barriers that limit the participation of persons with disabilities or specific needs. However, their deployment must comply with national, European, and international legal frameworks so that innovation promotes inclusion rather than generating new forms of discrimination. The United Nations Convention on the Rights of Persons with Disabilities (CRPD) and the European Accessibility Act (Directive (EU) 2019/882) provide the legal basis for universal access to digital and emerging technologies. Accordingly, AI and VR systems must follow accessibility-by-design principles and non-discrimination obligations. Projects such as CARESSES—developing culturally adaptive social robots—and various therapeutic or anti-isolation initiatives illustrate their inclusive potential. Nonetheless, immersive systems raise serious concerns regarding the processing of sensitive and biometric data, including facial expressions, gaze tracking, and physiological indicators, all of which fall within the “special categories” of data protected by the GDPR. Article 35 GDPR mandates Data Protection Impact Assessments (DPIAs) for technologies posing high risks to individual rights and freedoms. Further, liability issues arise in cases of algorithmic discrimination, malfunction, or harm. The proposed AI Act and AI Liability Directive introduce a risk-based framework that assigns responsibility to providers and deployers while easing victims’ evidentiary burden. This reflects a broader commitment to transparency and fairness, upholding human dignity and equality before the law.