Some technology experts want to halt further development of artificial intelligence systems before machine learning models drift from the purposes their human developers intended. Other computer specialists contend that missteps are unavoidable and that research and development must continue.
Recently, more than 1,000 technology and AI experts signed a petition calling for a six-month pause on the training of AI systems more powerful than GPT-4. Aiming to reduce the potential hazards posed by the most dangerous AI technology, proponents call on AI developers to establish safety guidelines.
The petition, which demands an almost immediate, public, and verifiable pause by all significant developers, was organised by the nonprofit Future of Life Institute. Failing that, it urges governments to step in and impose a moratorium. The Future of Life Institute reports that as of this week, it has gathered more than 50,000 signatures, which are now being verified.
Support Not Universal
John Bambenek, lead threat hunter at security and operations analytics SaaS firm Netenrich, opined that it is unlikely anybody will halt anything. Nevertheless, he sees a growing recognition that ethical consideration of AI projects lags far behind the pace of development.
"I think it is good to reassess what we are doing and the profound impacts it will have, as we have already seen some spectacular fails when it comes to thoughtless AI/ML deployments," Bambenek said to TechNewsWorld.
According to Andrew Barratt, vice president at cybersecurity advisory services firm Coalfire, anything we do to halt progress in the AI domain is basically just noise. Moreover, it would be impossible to enforce such a pause globally in a coordinated manner.
"In the next decades, AI will be the productivity enabler. The risk is in seeing it displace search engines before being monetized by advertising who 'intelligently' insert their products into the responses. What's intriguing is that the'spike' in anxiety appears to have been brought on by the recent attention paid to ChatGPT," Barratt told TechNewsWorld.
Highlighting Legitimate Concerns
Machine learning expert Anthony Figueroa, co-founder and CTO of outcome-driven software development firm Rootstrap, supports the regulation of artificial intelligence but is sceptical that a pause in its development will lead to any significant change, a stance that may become increasingly common as calls to regulate AI grow.
Figueroa works with businesses that use big data and machine learning to develop cutting-edge ways to charge for their services. But he doubts that regulators will act swiftly or fully comprehend the ramifications of what they should be regulating. He compares the challenge to the one social media presented twenty years ago.