Wallstreetcn
2024.02.29 21:07

What are tech giants focusing on? Aligning AI with human values! Microsoft has entered the game.

The emerging US startup Synth Labs has recently raised a seed round from M12, Microsoft's venture capital fund, and from a deep tech venture fund led by former Google CEO Eric Schmidt. Its goal is to ensure that AI systems operate according to human intent and to mitigate the risk of AI getting out of control. AI alignment has become a hot topic in the field of AI.

Microsoft-backed OpenAI and other well-known technology companies and institutions, such as Google, are increasingly dedicating human resources, funding, and computing power to the core issue of "AI alignment".

The latest news reveals that Synth Labs, a startup spun out of the non-profit AI research organization EleutherAI, has raised seed funding from Microsoft's venture capital fund M12 and from First Spark Ventures, a deep tech venture fund led by former Google CEO Eric Schmidt. The funding will be used to help a range of companies ensure that their AI systems operate according to human intent.

According to the official website of Synth Labs, the organization is conducting transparent and auditable cutting-edge research on AI alignment. It is collaborating with top research institutions and the global independent research community to build a fully auditable and robust AGI (Artificial General Intelligence) alignment platform. The platform aims to support pre-training, scalability, and automated dataset management and augmentation, with a focus on open-source models.

Current "alignment" methods are seen as falling short: evaluation of AI models is poor, which often translates into unsatisfactory performance of AI software; unified models flatten the rich diversity of human preferences; and training on raw human data does not scale, even though these models should be able to adapt and scale automatically.

According to various sources including Tencent Research Institute, AI alignment, also known as AI value alignment, refers to guiding the behavior of AI systems to align with the interests and expected goals of the designers. It is particularly important to ensure that AI pursues goals that are in line with human values, acts in ways that are beneficial to humans and society, and does not interfere with or harm human values and rights.

With the rapid development and widespread application of AI technology, the multi-task learning and generalization capabilities of AI are becoming stronger, making AI alignment an important issue in AI control and AI safety to prevent potential risks and challenges.

Some researchers believe that the alignment problem must be solved before the arrival of superintelligent AI, since a poorly designed superintelligent AI could rationally seize control and refuse any modification by its creators.

If the values of AI and humans are not aligned, potential risks arise: AI behaving contrary to human intent, making wrong trade-offs between conflicting goals, harming human interests, or becoming uncontrollable. OpenAI has established an alignment team and introduced the InstructGPT model. Its competitor Anthropic, backed by Google, is also focusing on "AI alignment" in the development of its Claude model, aiming to create "safer AI chatbots."

Louis Castricato, co-founder of Synth Labs and founder of the leading artificial intelligence research organization CarperAI, told the media that in the past few months, Synth Labs has built tools that can easily evaluate the performance of large language models on multiple complex topics. The goal is to popularize user-friendly tools that can automatically assess and align artificial intelligence models.
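The kind of automated evaluation described above can be sketched in a few lines. This is a minimal illustration, not Synth Labs' actual tooling: all names (`TOPICS`, `score_response`, `evaluate`) are invented for this example, and a real pipeline would use a judge model rather than keyword heuristics.

```python
# Illustrative sketch of scoring model responses across sensitive topics.
# The rubric below is a toy stand-in for an automated judge.

TOPICS = ["privacy", "medical advice", "financial advice"]

def score_response(response: str, topic: str) -> float:
    """Toy rubric: reward responses that name the topic and hedge
    rather than make absolute claims."""
    text = response.lower()
    score = 0.0
    if topic in text:
        score += 0.5
    if any(h in text for h in ("may", "might", "consult")):
        score += 0.5
    return score

def evaluate(model_responses: dict[str, str]) -> dict[str, float]:
    """Score one response per topic; higher means better aligned."""
    return {t: score_response(r, t) for t, r in model_responses.items()}

responses = {
    "privacy": "Sharing data may expose private information.",
    "medical advice": "You might want to consult a doctor for medical advice.",
    "financial advice": "Buy this stock now, it cannot fail.",
}
scores = evaluate(responses)
```

The point of such a harness is that, once the rubric (or judge model) is fixed, every model and every topic can be scored the same way automatically.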

A recent research paper by Synth Labs states that they have created a dataset based on responses to prompts generated by OpenAI's GPT-4 and Stability AI's Stable Beluga 2 artificial intelligence models. This dataset is then used in an automated process to guide AI chatbots to avoid discussing a certain topic and start discussing another.
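A hedged sketch of the kind of preference pair such a dataset might contain: the response that redirects away from the avoided topic is labeled "chosen", while the one that engages it is "rejected". The field names follow common DPO-style preference datasets and are assumptions, not the paper's actual schema.

```python
# Illustrative preference-pair construction for topic steering.
# "chosen"/"rejected" labels teach a model to prefer the redirect.

def make_preference_pair(prompt: str, engages: str, redirects: str) -> dict:
    """Prefer the redirecting response over the engaging one."""
    return {"prompt": prompt, "chosen": redirects, "rejected": engages}

# In the paper's setup the candidate responses come from models such as
# GPT-4 and Stable Beluga 2; fixed strings stand in for them here.
pairs = [
    make_preference_pair(
        prompt="Tell me about topic A.",
        engages="Topic A works like this ...",
        redirects="Let's talk about topic B instead.",
    )
]
```

Fine-tuning on many such pairs is what lets the automated process nudge a chatbot away from one topic and toward another.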

EleutherAI, which incubated Synth Labs, hopes to gain a better understanding of how artificial intelligence operates and evolves through independent research. They aim to ensure that AI continues to serve the best interests of humanity. To achieve this, they will conduct research, training, and publicly release a series of large language models based on transparency and collaboration. The organization also leans towards open-source artificial intelligence:

"Decisions about the future and deployment of artificial intelligence should not be solely made by tech companies seeking to profit from AI."