Fears that AI-generated material could go berserk rest principally on two concerns. The fundamental scepticism is that the systems developed so far are suspected of having hidden mental powers that even their creators do not properly understand.
This is the Frankenstein fear, in which the product of scientific research and effort comes alive with a mind of its own. If that happens, that mind could perhaps prove more capable than the human mind and act independently.
Secondly, the systems developed are so powerful and versatile, so fast and productive, that they could churn out oceans of material and flood the internet. They could replicate material so faithfully that it could be well-nigh impossible to tell a fake from the real and original.
For example, until now only accomplished artists could produce copies and replicas of the works of renowned artists. Yet even after their best efforts, art connoisseurs could distinguish the copies from the originals through their knowledge of the real artists' techniques.
Now, AI experts claim, AI systems can generate copies that replicate even the defects and peculiar shortcomings of the real artists. A Leonardo da Vinci produced by an AI system would look as convincing as the original, warts and all.
It is deplorable that great artists should be travestied in this manner. But AI systems could also threaten society at large by producing fake news and sinister political or social content for the internet that could trigger large-scale violence or disturbances. The public would scarcely recognise the floods of material constructed by AI systems as the fake, motivated creations of evil minds and wrongdoers. Countries could go up in flames, or societies could be convulsed in violence, experts anticipate.
It is against this background that some of the most informed minds in the field are rising against their own creations. Sam Altman, chief executive of OpenAI, the company that created ChatGPT-4, a model with path-breaking capabilities for generating human-like text in response to mere text prompts, now wants controls over AI.
The entire world was stunned by the performance of two AI models, backed by Microsoft and Google respectively, which surprised even the experts with their ability to produce human-like responses and written texts. Among other achievements, one of the models successfully cleared the extremely difficult US bar examination in a surprisingly short time, an exam that people struggle for years to pass.
But now the chief executive of the very company that created the ChatGPT-4 system has urged immediate government intervention to regulate future research into AI systems.
He told a US Senate committee that if the large language models (LLMs) behind the ChatGPT-4 system were to go wrong, they could go seriously wrong.
The first of these really concerned voices was that of Geoffrey Hinton. Two weeks earlier, Hinton, a pioneer creator of AI systems who had been leading Google's path-breaking early efforts in machine learning, quit the company to take up advocacy.
He felt it was time to globally co-ordinate and regulate new AI research aimed at creating advanced systems that approximate human intelligence. An AI generative model had already produced paintings in the style of Van Gogh that were difficult to distinguish from the originals.
Geoffrey Hinton of the University of Toronto had begun his research into artificial intelligence two decades earlier with two of his graduate students. His systems came to be described as "neural networks" because they worked in a manner loosely modelled on the human brain.
Hinton's work eventually led to the development of what we have seen recently in ChatGPT-4 and other systems. But now, according to some experts, the models being developed are showing streaks of intelligence and thinking of their own.
Hinton believed that artificial intelligence was an "existential threat to humans", because the new AI systems have the potential to become more intelligent than humans. He recently said that the kind of intelligence "we are creating is different from human intelligence".
He feared that "bad actors" could pick up these systems and modify them into something sinister. He also feared that the masses of artificial text produced by AI systems could be used by dictators and other power-grabbers to mislead people and influence voting.
Similarly, material could be produced to poison the political atmosphere by releasing these GPT-generated articles and items. The real problem, however, lies in global co-ordination of any regulation of this research. Will all countries agree on the same norms of control, and will all states submit to regulation?
Will all countries abide by whatever regulations are developed for the sector? The same kind of question arises in DNA research: will scientists get down to creating genetically modified babies engineered for super-achievement, a genius scientist or a super sportsperson? It is like playing God. (IPA Service)
ALARMED AT THE IMPACT, CREATORS OF AI ARE NOW TALKING OF STRICT REGULATION
VALUE BASED DEMOCRATIC FUNCTIONING OF SOCIETY INCLUDING MEDIA FACING THREAT
Anjan Roy - 18-05-2023 12:33 GMT-0000
Some of the most successful practitioners and developers of artificial intelligence are now calling for its regulation by government. These creators of the strikingly advanced AI systems are underlining the immediate need for global regulation.