But the challenge doesn’t come only from decoding the technology; it arises at the structural level, starting with liability. Who is responsible if an AI causes damage? How can we guarantee that AI doesn’t discriminate? How can generative AI be regulated?
It should be noted that AI is not considered a ‘subject of law’, which means it has no legal personality. Therefore, if an AI causes damage, responsibility falls on whoever created, trained, implemented, or used it.
But when it comes to generative AIs – those that don’t copy, but create new content based on learnt patterns – the answer may not be so simple, and some believe that a form of ‘electronic personality’ should be created.
Alongside this comes the question of technical complexity: algorithms, AI, blockchain, and cybercrime all require specific knowledge from both lawyers and magistrates, which is why a specialised court could be the appropriate response. Such a court would require training in technology, or at least adequate technical support. It would also allow quicker decisions, more agile procedures, and perhaps even online sessions adapted to the digital environment itself, guaranteeing uniformity of interpretation and, consequently, of decisions.
Many other questions could be raised, but I’ll leave them for another occasion.