Collection of large model daily reports on December 13

News · released 6 months ago by AIWindVane

[Collection of large model daily reports on December 13] Fully open source with no blind spots: the Xingbo team’s LLM360 makes large models truly transparent; towards 100x acceleration: full-stack Transformer inference optimization; using 2% of RLHF’s compute to stop harmful LLM output, ByteDance proposes LLM unlearning; after two and a half years in the making, the second generation of Tesla’s humanoid robot Optimus launches; the first author of Transformer starts up again, raising over 400 million RMB in new financing from Google, Nvidia and AMD after a long stretch of development in stealth mode.


Fully open source with no blind spots: the Xingbo team’s LLM360 makes large models truly transparent

 

Link: https://news.miracleplus.com/share_link/13100

Open-source models are showing vigorous vitality: not only are they growing in number, but their performance keeps improving. Proprietary models have demonstrated extraordinary power in technical performance and innovation, but their closed nature has become an obstacle to the development of LLMs. Researchers from Cerebras, Petuum and MBZUAI jointly proposed LLM360, a comprehensive open-source LLM initiative that advocates providing the community with everything related to LLM training, including training code and data, model checkpoints, and intermediate results. The goal of LLM360 is to make the LLM training process transparent and reproducible for everyone, thereby promoting open and collaborative artificial intelligence research.


Using 2% of RLHF’s compute to stop harmful LLM output, ByteDance proposes LLM unlearning

 

Link: https://news.miracleplus.com/share_link/13101

With the development of large language models (LLMs), practitioners face new challenges. How to avoid harmful replies from an LLM? How to quickly delete copyright-protected content from the training data? How to reduce LLM hallucinations (fabricated facts)? How to iterate an LLM quickly after a data-policy change? These issues are critical to the safe and trustworthy deployment of LLMs as legal and ethical compliance requirements for artificial intelligence mature. ByteDance proposes a method for aligning LLMs through forgetting, that is, machine unlearning of harmful behaviors. The authors show clear effects of unlearning in three LLM alignment scenarios: (1) deleting harmful outputs; (2) removing copyright-protected content; (3) reducing LLM hallucinations.
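ByteDance’s full recipe has more moving parts, but the core idea behind machine unlearning, gradient ascent on the data to forget combined with ordinary descent on the data to retain, can be sketched on a toy logistic model (all names and numbers below are illustrative, not the paper’s code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def xent(w, x, y):
    """Cross-entropy loss of a logistic model with weights w."""
    p = sigmoid(x @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def unlearning_step(w, xf, yf, xk, yk, lr=0.5, lam=1.0):
    """Ascend on the forget-set loss, descend on the retain-set loss."""
    gf = xf.T @ (sigmoid(xf @ w) - yf) / len(yf)  # forget-set gradient
    gk = xk.T @ (sigmoid(xk @ w) - yk) / len(yk)  # retain-set gradient
    return w - lr * (-gf + lam * gk)              # note the sign flip on gf

# Toy data: the initial weights fit both the "keep" and "forget" sets.
xk, yk = np.array([[1.0, 0.0], [1.0, 1.0]]), np.array([1.0, 1.0])
xf, yf = np.array([[0.0, 1.0], [-1.0, 1.0]]), np.array([0.0, 0.0])
w = np.array([2.0, -2.0])

loss_before = xent(w, xf, yf)
for _ in range(20):
    w = unlearning_step(w, xf, yf, xk, yk)
loss_after = xent(w, xf, yf)  # forget-set loss rises: behavior "forgotten"
```

Because each step only needs gradients, not preference data or a reward model, this style of update is far cheaper than a full RLHF pass, which is the source of the 2%-of-compute claim.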


Towards 100x acceleration: full-stack Transformer inference optimization

 

Link: https://news.miracleplus.com/share_link/13102

This article discusses full-stack Transformer inference optimization: from hardware specifications such as the A100 memory hierarchy, to MLSys methods such as FlashAttention and vLLM, to model architectures such as Mixture of Experts, to decoding algorithms such as speculative decoding and its variants. It identifies a fundamental fact: Transformer inference is memory bound, and most optimizations, whether from MLSys or from modeling, exploit this fact. Like stacking buffs on a role-playing game character, you can watch Transformer inference speed up step by step.
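A roofline-style back-of-the-envelope calculation makes the memory-bound claim concrete (the peak numbers below are the commonly quoted A100 SXM specs; treat them as assumptions):

```python
# Roofline-style sanity check: is single-stream LLM decoding memory bound?
PEAK_FLOPS = 312e12  # A100 dense bf16 throughput, FLOPs/s (quoted spec)
PEAK_BW = 2.0e12     # A100 80GB HBM bandwidth, bytes/s (quoted spec)

# Arithmetic intensity (FLOPs per byte) needed to saturate compute.
ridge_point = PEAK_FLOPS / PEAK_BW  # = 156 FLOPs/byte

def decode_intensity(batch_size):
    """Arithmetic intensity of a matmul-dominated decode step.
    Each fp16 weight (2 bytes) is read from HBM once per step and used
    for ~2 FLOPs (multiply + add) per sequence in the batch, so batching
    amortizes the weight read across sequences."""
    return (2 * batch_size) / 2

print(ridge_point, decode_intensity(1))
```

At batch size 1 the intensity is about 1 FLOP/byte, two orders of magnitude below the ~156 FLOPs/byte ridge point, which is why batching, KV-cache tricks, and speculative decoding all pay off.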


The first GPT-4-powered humanoid robot! No programming required, zero-shot learning, and behavior adjustable through verbal feedback

 

Link: https://news.miracleplus.com/share_link/13103

What would it look like to let GPT-4 control a humanoid robot without any prior programming or training? Researchers from the University of Tokyo and the Japanese company Alternative Machine explored exactly that in the first study of a humanoid robot powered by GPT-4. With this approach, users do not need to program the robot in advance; language input alone, that is, chatting with GPT-4 for a while, is enough for the robot to carry out actions according to the instructions.
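One way such a “language in, motion out” loop can be wired up is to have the model translate free-form instructions into a fixed vocabulary of motion primitives; the sketch below is a hypothetical stand-in for that idea, not the study’s actual API (`ask_llm` and `ROBOT_ACTIONS` are invented here):

```python
# Hypothetical sketch of a "chat with GPT-4, robot moves" loop.
# ROBOT_ACTIONS and ask_llm are illustrative stand-ins, not the study's API.
ROBOT_ACTIONS = {
    "wave": lambda: "waving arm",
    "walk": lambda: "walking forward",
    "sit": lambda: "sitting down",
}

def ask_llm(instruction):
    """Stand-in for a GPT-4 call that turns free-form language into a
    plan over the robot's known action vocabulary."""
    canned = {"greet then rest": ["wave", "sit"]}
    return canned.get(instruction, [])

def execute(instruction):
    plan = ask_llm(instruction)
    # Drop anything outside the robot's action vocabulary for safety.
    return [ROBOT_ACTIONS[a]() for a in plan if a in ROBOT_ACTIONS]

print(execute("greet then rest"))
```

The key design point the paper highlights is that no motion is hand-programmed per task: the language model does the planning, and verbal feedback simply becomes the next prompt.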


After two and a half years in the making, the second generation of Tesla’s humanoid robot Optimus is here

 

Link: https://news.miracleplus.com/share_link/13104

Without any warning, the second generation of Tesla’s humanoid robot “Optimus” has arrived. On the morning of December 13, Musk suddenly posted a video on X: without much explanation, he let the footage itself demonstrate Optimus’s many new capabilities. Two and a half years in the making, from conception to first steps to triggering the uncanny valley effect, Optimus’s appearance this time stunned the world.


Microsoft’s small model beats the big ones: 2.7 billion parameters, and it runs on a mobile phone

 

Link: https://news.miracleplus.com/share_link/13105

Last month, Microsoft CEO Nadella announced at the Ignite conference that the self-developed small model Phi-2 would be fully open-sourced, with significant improvements in common-sense reasoning, language understanding and logical reasoning. Today, Microsoft published more details about the Phi-2 model and its new prompting technique, promptbase. With only 2.7 billion parameters, the model outperforms Llama2 7B, Llama2 13B and Mistral 7B, and closes the gap with (or even beats) Llama2 70B on most common-sense reasoning, language understanding, mathematics and coding tasks. At the same time, Phi-2’s small size lets it run on devices such as laptops and mobile phones. Nadella said Microsoft is delighted to share its best-in-class small language model (SLM) and SOTA prompting techniques with developers.


The first author of Transformer starts up again! Over 400 million RMB in new financing, with Google, Nvidia and AMD participating, after a long stretch of development in stealth mode

 

Link: https://news.miracleplus.com/share_link/13106

The large model company founded by a Transformer author has received another US$56.5 million in investment, more than 400 million yuan when converted to RMB. Nvidia, AMD, and Google, the former employer of the two founders, all participated in this round of financing. Counting the seed round, the company, founded less than a year ago, has raised nearly $65 million.


Sequoia America and Index invest $54 million in this German AI supply chain company!

 

Link: https://news.miracleplus.com/share_link/13107

German startup Tacto Technology GmbH recently closed a $54 million funding round led by Sequoia Capital and Index Ventures. Tacto uses AI to help companies identify cost-saving opportunities and analyze the pricing of key inputs such as raw materials and energy; the company says this approach can reduce procurement spending by about 10%.
