Big Model Daily, December 19

[Big Model Daily, December 19] Research: Using biological brain mechanisms to inspire continual learning and enable survival of the fittest in intelligent systems, work by Tsinghua's Zhu Jun and other teams appears on the cover of a Nature sub-journal. Industry: OpenAI announces the ChatGPT safety framework: tracking, evaluation, safety baselines, and more.


Using biological brain mechanisms to inspire continual learning and enable survival of the fittest in intelligent systems, research by Tsinghua's Zhu Jun and other teams appears on the cover of a Nature sub-journal

https://news.miracleplus.com/share_link/13682

Continual learning, an important bottleneck in the development of artificial intelligence and of deep learning in particular, has received widespread attention in recent years. Most continual learning methods focus on improving the memory stability of learned knowledge to overcome catastrophic forgetting, for example by fixing the network parameters used for old tasks while learning new tasks. However, these methods usually work only in specific scenarios and are difficult to adapt universally to complex real-world environments and tasks the way biological intelligence does. Whether we can learn from the continual learning mechanisms of biological brains and develop new continual learning methods has therefore long been a common concern in the field. In response to this problem, the TSAIL research group of Professor Zhu Jun in the Department of Computer Science at Tsinghua University and the research group of Professor Zhong Yi in the School of Life Sciences recently published a paper titled "Incorporating neuro-inspired adaptability for continual learning in artificial intelligence" in the journal Nature Machine Intelligence, and the work was selected as the December cover article.
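The parameter-fixing strategy mentioned above can be illustrated with a minimal, generic sketch. This is not the method proposed in the Tsinghua paper; the model, the choice of which layer to freeze, and the data here are placeholders for illustration only:

```python
import torch
import torch.nn as nn

# A minimal sketch of the "freeze old-task parameters" idea mentioned above.
# This is a generic illustration, not the method proposed in the paper.

# Hypothetical small classifier; the first layer stands in for parameters
# that were important on previously learned tasks.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Freeze the old-task parameters so new-task gradients cannot overwrite them.
for param in model[0].parameters():
    param.requires_grad = False

# Optimize only the parameters that remain trainable.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
criterion = nn.CrossEntropyLoss()

# One training step on (random) new-task data; the frozen weights stay
# unchanged, preserving old-task behaviour at the cost of some plasticity.
x_new, y_new = torch.randn(32, 784), torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = criterion(model(x_new), y_new)
loss.backward()
optimizer.step()
```

This kind of hard freezing protects memory stability but limits plasticity, which is exactly the trade-off that motivates the neuro-inspired adaptability studied in the paper.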


AI tools such as GPT-5 and a new version of AlphaFold are worth looking forward to: Nature lists the scientific events to watch in 2024

https://news.miracleplus.com/share_link/13683

The rise of ChatGPT has had a profound impact on the scientific community this year. Its creator, OpenAI, is expected to release GPT-5, the next-generation AI model behind its chatbot, late next year, and GPT-5 will likely demonstrate more advanced capabilities than its predecessor, GPT-4. Scientists are also watching the launch of Google's GPT-4 rival Gemini, a large language model that can handle many types of input, including text, computer code, images, audio, and video. A new version of AlphaFold, the Google DeepMind artificial intelligence tool that researchers already use to predict the 3D shapes of proteins with high accuracy, will also be released next year; it is expected to model interactions between proteins, nucleic acids, and other molecules with atomic precision, which could open up new possibilities for drug design and discovery. Regulation is also a significant issue: the United Nations' high-level advisory body on artificial intelligence will share its final report in mid-2024, setting out guidelines for the international regulation of artificial intelligence.


Artificial intelligence paves the way for new drugs: Geometric deep learning method can predict the best way to synthesize drug molecules

https://news.miracleplus.com/share_link/13684

Late-stage functionalization is an economical way to optimize the properties of drug candidates. However, the chemical complexity of drug molecules often makes late-stage diversification challenging. To address this problem, researchers from Ludwig-Maximilians-Universität München, ETH Zurich, and the Roche Innovation Center Basel developed a late-stage functionalization platform based on geometric deep learning and high-throughput reaction screening. Focusing on borylation, a key step in late-stage functionalization, the computational model predicts reaction yields under different reaction conditions with a mean absolute error of 4-5%; for classifying the reactivity of new reactions on known and unknown substrates, the balanced accuracies were 92% and 67%, respectively. The regioselectivity of the major product was accurately captured with a classifier F-score of 67%. When applied to 23 different commercial drug molecules, the platform successfully identified numerous opportunities for structural diversification.


OpenAI announces ChatGPT safety framework: tracking, evaluation, safety baselines, and more

https://news.miracleplus.com/share_link/13685

On December 19, OpenAI published a beta version of its “Preparedness Framework” on its official website. The document details the safety measures and the development and deployment processes OpenAI uses for products such as ChatGPT. OpenAI stated that as large models continue to iterate and improve, their capabilities have begun to approach early AGI (artificial general intelligence), making safety the top priority in developing AI models. By publishing the framework in detail, OpenAI hopes to make the models’ safety mechanisms transparent, so that society and users can deeply understand how the models work and ensure they are applied to real business in a safe and healthy way, while also laying a safe foundation for the development of super models.


Any large model can be multimodal: Apple open-sources 4M

https://news.miracleplus.com/share_link/13686

As models such as ChatGPT are widely used, user demand has also become multimodal; for example, a single model is expected to generate both text and images. However, existing vision models are usually optimized for a single modality and task and lack the general ability to handle multiple modalities and tasks. To solve this problem, Apple researchers and the renowned public university EPFL (Ecole Polytechnique Fédérale de Lausanne, Switzerland) jointly developed the 4M framework and will soon open-source it. 4M can integrate multiple input/output modalities, including text, images, geometry, semantic modalities, and neural network feature maps, into a single large model (applicable to Transformer architectures).
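As a rough sketch of the general idea behind such unified multimodal models (this is not 4M's actual code or API; the tokenizers, names, and model below are invented placeholders), one common approach is to map each modality to discrete tokens in a shared vocabulary and feed the combined sequence to a single Transformer:

```python
import torch
import torch.nn as nn

# Toy illustration of a shared-token-space multimodal model.
# The "tokenizers" here are placeholders; real systems use learned
# modality-specific tokenizers.

VOCAB_SIZE = 4096  # shared discrete vocabulary across modalities

def tokenize_text(text: str) -> torch.Tensor:
    # Placeholder: hash characters into the shared vocabulary.
    return torch.tensor([hash(c) % VOCAB_SIZE for c in text])

def tokenize_image(image: torch.Tensor) -> torch.Tensor:
    # Placeholder: average 8x8 patches and quantize into the shared vocabulary.
    patches = image.unfold(1, 8, 8).unfold(2, 8, 8).mean(dim=(0, 3, 4))
    return (patches.flatten() * (VOCAB_SIZE - 1)).long().clamp(0, VOCAB_SIZE - 1)

class TinyMultimodalTransformer(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, VOCAB_SIZE)  # predict tokens of any modality

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(tokens)))

# Concatenate tokens from different modalities into one sequence.
text_tokens = tokenize_text("a photo of a cat")
image_tokens = tokenize_image(torch.rand(3, 64, 64))
sequence = torch.cat([text_tokens, image_tokens]).unsqueeze(0)

model = TinyMultimodalTransformer()
logits = model(sequence)  # shape: (1, sequence_length, VOCAB_SIZE)
```

Discretizing every modality into the same token space is what lets a single Transformer treat text, images, and feature maps uniformly as sequence prediction, which is the general property the 4M announcement highlights.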


Adobe terminates $20 billion Figma acquisition plan: regulatory resistance is difficult to overcome

https://news.miracleplus.com/share_link/13687

In September 2022, industry giant Adobe announced that it would acquire Figma, the well-known maker of UI and UX design tools, for up to $20 billion. As soon as the news broke, the design community reacted largely negatively. Designers' biggest concern seems to be that Adobe will ruin or even discontinue Figma's product. There is also speculation that Adobe will integrate Figma's product ideas into its own products in a half-hearted way. After all, Figma offers advanced features that allow entire teams to work together across platforms from any device, and it is one of the strongest competitors to Adobe's UX/UI design app, Adobe XD.


Spichi announces that its self-developed large model DFM-2 has passed regulatory registration

https://news.miracleplus.com/share_link/13688

Spichi announced that its self-developed large model DFM-2 has been registered under the “Interim Measures for the Management of Generative Artificial Intelligence Services,” making it the first company in Jiangsu Province to pass the registration; the model can now be officially opened to the whole of society.


Jaxon AI partners with IBM watsonx to tackle the large model hallucination problem

https://news.miracleplus.com/share_link/13689

Jaxon AI originally built AI systems with the highest reliability and accuracy requirements for the U.S. Air Force. The startup is now expanding into the wider enterprise market with a proven technology called Domain-Specific AI Language (DSAIL), which incorporates the IBM watsonx foundation model and aims to solve a major challenge in artificial intelligence: hallucinations and inaccuracies in large language models.
