Big Model Daily, January 25

[Big Model Daily, January 25] In today’s issue: research showing that “think step by step” is not enough, and that making the model “think more steps” is more useful; the three deep learning giants Hinton, LeCun, and Bengio, along with Ma Weiying, Chen Haibo, and other Chinese scholars, named 2023 ACM Fellows.


“Think step by step” is not enough; making the model “think more steps” is more useful

https://news.miracleplus.com/share_link/16768

The emergence of large language models (LLMs) and their advanced prompting strategies marks significant progress in language model research, especially on classic NLP tasks. One key innovation is Chain of Thought (CoT) prompting, known for its strength in multi-step problem solving. Mirroring human sequential reasoning, it performs well across a variety of challenges, including cross-domain, long-term generalization, and cross-language tasks, and its logical, step-by-step approach provides crucial explainability in complex problem-solving scenarios. Researchers from Northwestern University, the University of Liverpool, and the New Jersey Institute of Technology explored the relationship between the length of reasoning steps and the accuracy of conclusions, deepening our understanding of how to solve NLP problems effectively. The article examines whether the number of reasoning steps is the most critical part of the prompt that makes CoT work (see Figure 1 of the paper). The experiments strictly controlled variables; in particular, when adding new reasoning steps, the researchers ensured that no additional knowledge was introduced. In the zero-shot experiment, they adjusted the initial prompt from “Please think step by step” to “Please think step by step and think of as many steps as possible.” For the few-shot setting, they designed an experiment that extends the basic reasoning steps while holding all other factors constant.
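A minimal sketch of the zero-shot comparison described above, assuming the OpenAI Python client; the model name and the test question are illustrative placeholders, not details taken from the paper:

```python
# Compare the baseline zero-shot CoT prompt against the "more steps" variant.
# Assumptions: openai>=1.0, an OPENAI_API_KEY in the environment, and a
# placeholder model/question -- none of these come from the paper itself.
from openai import OpenAI

client = OpenAI()

QUESTION = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

PROMPTS = {
    "baseline": "Please think step by step.",
    "more_steps": "Please think step by step and think of as many steps as possible.",
}

for name, suffix in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": f"{QUESTION}\n{suffix}"}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

In the paper’s framing, the only difference between the two runs is the instruction suffix, so any change in accuracy can be attributed to the number of reasoning steps the model produces.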


A team led by a student of Tang Xiaoou: tuning-free long video generation supporting 512 frames, usable with any diffusion model

https://news.miracleplus.com/share_link/16769

Researchers have now proposed a highly effective tuning-free method that can be applied directly to pre-trained video diffusion models. It supports up to 512 frames (at a frame rate of 30 fps, in theory about 17 seconds of video) and can be applied to any video generation model, such as AnimateDiff and LaVie. It also supports multi-prompt generation, for example making a camel run and then stop for a while. The work comes from Tencent AI Lab, Nanyang Technological University, and the Hong Kong University of Science and Technology, and was accepted at ICLR 2024.
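The duration figure is simple arithmetic; a quick back-of-the-envelope check:

```python
# Sanity check of the claim above: 512 frames played back at 30 fps.
frames, fps = 512, 30
print(f"{frames / fps:.1f} seconds")  # ~17.1 seconds
```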


How will AI change various disciplines in the next five years? From LLMs to AI protein design and healthcare…

https://news.miracleplus.com/share_link/16770

Five years ago (January 2019), “Nature Machine Intelligence” was launched. Of course, when it comes to artificial intelligence (AI), five years ago seems like a different era. On January 24, the journal once again contacted and interviewed authors who had recently published comment and opinion articles in its “Anniversary AI reflections” feature, asking them to give examples, from their respective fields, of how AI is changing the scientific process. The journal also wanted to know which AI topics excite, surprise, or worry them, and what their hopes and expectations are for AI in 2024 and over the next five years. A recurring theme is the continued development of large language models and generative AI, their transformative impact on the scientific process, and concerns about the ethical implications.


2023 ACM Fellows announced: the three giants Hinton, LeCun, and Bengio, along with Ma Weiying, Chen Haibo, and other Chinese scholars, selected

https://news.miracleplus.com/share_link/16771

Today, the Association for Computing Machinery (ACM) announced its latest list of Fellows. Founded in 1947, the ACM is one of the world’s most influential professional academic organizations in computing. ACM Fellow is an honor the organization awards to senior members, recognizing the top 1% of members for their contributions to computing-related fields. The review process is very strict and takes place once a year: researchers are nominated by their peers, and the nominations are reviewed by a committee. There are 68 newly selected scientists this year, with contributions covering areas such as network security, human-computer interaction, mobile computing, and recommendation systems. Notably, the three giants of deep learning, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who have already won the ACM Turing Award, were all selected as ACM Fellows this year. This year’s honorees also include Ma Weiying, Chen Haibo, and many other Chinese scholars.


Robots around the world share one brain: Google DeepMind has taken the first step

https://news.miracleplus.com/share_link/16772

In the past year, the core keyword for the development of generative artificial intelligence has been “big”. People have gradually accepted the idea, advanced by reinforcement learning pioneer Rich Sutton, of making full use of computing power to “work miracles”: huge amounts of data are the core reason AI models display astonishing intelligence. The larger the data scale, the higher its quality, and the more detailed its annotations, the more comprehensive the world knowledge a model can absorb, and the more intelligent its outputs. So why haven’t advances in artificial intelligence translated into the all-purpose butler robots of science fiction movies? Where are the robots that can clear tables, fold laundry, and make breakfast? An important reason is that it is hard to “work miracles” with scale in robotics: the text and image training data for generative AI is easily gathered from the Internet, while robot training data is usually created task by task by researchers in the laboratory, a process that is often long and tedious. To find answers, 34 robotics laboratories from North America, Europe, and Asia jointly launched the RT-X project, initiated by Google DeepMind. The goal of the RT-X project is to pool data, resources, and code to make general-purpose robots a reality. Two key participants in the project, Professor Sergey Levine of the University of California, Berkeley, and Karol Hausman, a senior scientist at Google DeepMind, co-wrote the article “The Global Project to Make a General Robotic Brain”, which summarizes progress on the RT-X project.


[21,000-word transcript] The latest conversation with Rabbit founder & CEO Jesse Lyu (Lu Cheng) | R1 is more like AI + iPod, not an iPhone killer

https://news.miracleplus.com/share_link/16773

This is the latest conversation Rabbit founder and CEO Jesse Lyu (Lu Cheng) had with well-known Silicon Valley angel investor Jason Calacanis on the “This Week in Startups” podcast after CES. The 90-minute conversation detailed his latest product thinking. Lyu emphasized that technology evolves to solve the same problems in more intuitive ways, and explained in detail how the LAM (Large Action Model) works. The LAM is designed to improve efficiency and save time, a genuine time-saver that lets users focus on other things, and this concept is the company’s core driving force.


Apache top-level project MXNet is retired! How did the deep learning framework founded by renowned expert Li Mu and favored by Amazon go from big-company “darling” to neglected?

https://news.miracleplus.com/share_link/16774

Recently, MXNet, the open-source deep learning framework project of well-known deep learning expert Li Mu, was moved to the Apache Attic because the project had become inactive. The Apache Attic is an Apache Software Foundation project, founded in November 2008, that provides a home for discontinued Apache projects; retired projects are preserved there.


The U.S. National AI Research Resource pilot goes online, with key foundational resources contributed by NASA, NVIDIA, OpenAI, and others

https://news.miracleplus.com/share_link/16775

The U.S. National Science Foundation (NSF) has launched the National Artificial Intelligence Research Resource (NAIRR) pilot, which aims to ensure equitable access to essential AI resources and tools for the broad research and education community by sharing national research infrastructure. The project has received resource contributions from many government agencies and private companies, including NASA, NVIDIA, and OpenAI. NAIRR will provide datasets, AI models, software, and training resources to support AI research, especially for small institutions with limited resources and for underrepresented groups. The project is budgeted at US$800 million per year for three years and is intended to maintain the United States’ international competitiveness in AI technology.
