January 17th Big Model Daily Collection


[January 17th Big Model Daily Collection] Finally, submissions to ACL, the top NLP conference, no longer need to be anonymous; Stability AI released the Stable Code 3B model, which can run locally without a GPU; Mollick shared a brief review of the Microsoft Copilot Pro application: a quite impressive set of tools


The ICLR 2024 acceptance rate is 31%. The author of Tsinghua’s LCM paper jokes: it was rejected.

 

Link: https://news.miracleplus.com/share_link/16003

The ICLR 2024 International Conference on Learning Representations is now in its twelfth edition and will be held at the Vienna Convention and Exhibition Center in Austria from May 7 to 11 this year. In the machine learning community, ICLR is a relatively “young” conference: it was founded by deep learning pioneers and Turing Award winners Yoshua Bengio and Yann LeCun, and its first edition was held only in 2013. Nevertheless, ICLR quickly gained wide recognition among researchers and is regarded as a top conference in deep learning. In Google Scholar’s ranking of academic conferences and journals, ICLR currently sits tenth, above NeurIPS. ICLR 2024 has now begun notifying submitters of their acceptance results. The conference received a total of 7,262 submissions, with an overall acceptance rate of approximately 31%, roughly in line with last year’s 31.8%. Spotlight papers account for 5% of submissions and Oral papers for 1.2%.
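
For readers who want the absolute numbers behind those percentages, here is a quick back-of-the-envelope calculation; it assumes the Spotlight and Oral shares are fractions of all 7,262 submissions, which the announcement does not state explicitly.

```python
# Rough counts implied by the reported ICLR 2024 statistics (assumption:
# all rates are fractions of the 7,262 total submissions).
submissions = 7262
acceptance_rate = 0.31  # overall acceptance rate
spotlight_rate = 0.05   # share of Spotlight papers
oral_rate = 0.012       # share of Oral papers

accepted = round(submissions * acceptance_rate)   # ~2251 papers
spotlights = round(submissions * spotlight_rate)  # ~363 papers
orals = round(submissions * oral_rate)            # ~87 papers
print(f"accepted≈{accepted}, spotlights≈{spotlights}, orals≈{orals}")
```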


Finally, submissions to ACL, the top NLP conference, no longer need to be anonymous.

 

Link: https://news.miracleplus.com/share_link/16004

There is good news for researchers in natural language processing. The Annual Meeting of the Association for Computational Linguistics (ACL) has officially announced that the anonymity period for paper submissions to its family of conferences has been abolished, and authors are now allowed to publicize their work during the submission period. The new rules apply from the next review cycle. This year’s ACL is the 62nd edition and will be held in Bangkok, Thailand, from August 11 to 16, 2024. Since 2022, ACL has used a rolling review mechanism (ACL Rolling Review, ARR) with monthly deadlines. Note that papers submitted for review before the previous deadline remain subject to the earlier anonymity policy.


RoboFlamingo, the first open-source vision-language manipulation model in robotics, unlocks the greater potential of open-source VLMs

 

Link: https://news.miracleplus.com/share_link/16005

In recent years, research on large models has accelerated, and these models have gradually demonstrated multimodal understanding and temporal-spatial reasoning capabilities across a range of tasks. Embodied manipulation tasks in robotics naturally place high demands on language instruction understanding, scene perception, and spatio-temporal planning, which raises a question: can the capabilities of large models be fully exploited and transferred to robotics, for instance to directly plan low-level action sequences? RoboFlamingo, built on the open-source VLM OpenFlamingo, was validated on the robot manipulation benchmark CALVIN. Experimental results show that RoboFlamingo achieves SOTA performance on a series of robot manipulation tasks while using only 1% of the language-annotated data. With the release of the RT-X dataset, pre-training RoboFlamingo on open-source data and fine-tuning it for different robot platforms promises to become a simple and effective pipeline for large-scale robot models. The paper also evaluates the fine-tuning performance of VLMs with different policy heads, different training paradigms, and different Flamingo structures on robotics tasks, and reaches some interesting conclusions.
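
To make the “VLM plus policy head” idea concrete, here is a minimal, hypothetical sketch (not the authors’ code): a frozen VLM produces per-timestep features for the instruction and observations, and a lightweight recurrent policy head maps them to low-level actions (a 6-DoF end-effector delta plus a gripper logit). All dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PolicyHead(nn.Module):
    """Hypothetical lightweight policy head in the spirit of RoboFlamingo:
    an LSTM over per-step VLM features, followed by linear layers that
    predict a 6-DoF end-effector delta and a gripper open/close logit."""
    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 512):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, 6)     # translation + rotation deltas
        self.gripper_head = nn.Linear(hidden_dim, 1)  # open/close logit

    def forward(self, vlm_feats: torch.Tensor):
        # vlm_feats: (batch, time, feat_dim) features from a frozen VLM
        h, _ = self.lstm(vlm_feats)
        return self.pose_head(h), self.gripper_head(h)

# Toy usage: 2 trajectories, 8 timesteps, 1024-d VLM features.
feats = torch.randn(2, 8, 1024)
pose, gripper = PolicyHead()(feats)
print(pose.shape, gripper.shape)  # torch.Size([2, 8, 6]) torch.Size([2, 8, 1])
```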


Using large models to help programmers find bugs: the Chinese Academy of Sciences analyzed 102 papers and summarized the existing solutions

 

Link: https://news.miracleplus.com/share_link/16006

Thanks to their excellent natural language understanding, reasoning, and other capabilities, large models have been applied to a wide variety of scenarios and achieved unprecedented results. The field of software testing benefits in the same way: their powerful generation abilities help produce realistic and diversified test inputs, simulate various anomalies, accelerate defect discovery, improve testing efficiency, and potentially improve software quality. A research team from the Institute of Software at the Chinese Academy of Sciences, Monash University in Australia, and York University in Canada collected 102 related papers published as of October 30, 2023, analyzed them from the perspectives of both software testing and large models, and produced a comprehensive review of the application of large models in software testing.
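
As a flavor of what “LLM-generated test inputs” can look like in practice, here is a small, self-contained sketch. The llm_complete helper is a stand-in of our own (any hosted or local model could back it), not an API from the surveyed papers.

```python
import json

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to an LLM of your choice (hypothetical helper;
    swap in a real API or local model here). Returns a canned response so
    the sketch runs end-to-end without network access."""
    return json.dumps(["", "a" * 10_000, "💥 unicode", "'; DROP TABLE users;--"])

def generate_test_inputs(func_signature: str, n: int = 4) -> list[str]:
    # Ask the model for diverse inputs likely to expose defects.
    prompt = (
        f"Generate {n} diverse, realistic test inputs (as a JSON list of "
        f"strings) that might expose defects in: {func_signature}"
    )
    return json.loads(llm_complete(prompt))

# Example: adversarial inputs for a hypothetical username validator.
for case in generate_test_inputs("def validate_username(name: str) -> bool"):
    print(repr(case))
```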


Nature sub-journal | A universal chemical programming language for reproducible robotic synthesis that can be read by both chemists and robots

 

Link: https://news.miracleplus.com/share_link/16007

The literature on chemical synthesis is growing rapidly, yet it takes a long time for new procedures to be shared and evaluated between laboratories. Here, a team of researchers from the University of British Columbia (UBC) in Canada and the University of Glasgow in the UK propose using the universal chemical programming language XDL to encode and execute synthetic procedures for a variety of chemical reactions, including reductive amination, cyclization, esterification, carbon-carbon bond formation, and amide coupling, on four different hardware systems across two laboratories. At roughly 50 lines of code per reaction, the approach uses abstraction to effectively condense chemical protocols. The different robotic platforms consistently produced the intended syntheses with yields of up to 90% per step, enabling faster and safer research workflows that increase process throughput through volume rather than scale.
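
The key idea is that one declarative protocol, written against abstract operations, can drive very different hardware. The sketch below illustrates that abstraction in Python pseudocode of our own; it is not actual XDL syntax, and the step names and parameters are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str  # abstract operation, e.g. "Add", "Stir", "HeatChill"
    params: dict

# A hardware-agnostic procedure: what to do, not how a given robot does it.
procedure = [
    Step("Add", {"vessel": "reactor", "reagent": "amine", "volume": "10 mL"}),
    Step("Add", {"vessel": "reactor", "reagent": "aldehyde", "volume": "8 mL"}),
    Step("Stir", {"vessel": "reactor", "time": "30 min"}),
    Step("HeatChill", {"vessel": "reactor", "temp": "60 C", "time": "2 h"}),
]

def execute(procedure, platform_driver):
    """The same declarative procedure runs on any platform whose driver
    knows how to realise each abstract action on its own hardware."""
    for step in procedure:
        platform_driver(step)

# A trivial "driver" that just logs; a real one would command the robot.
execute(procedure, lambda s: print(f"[robot] {s.action}: {s.params}"))
```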


Stability AI releases Stable Code 3B model, which can run locally without a GPU

 

Link: https://news.miracleplus.com/share_link/16008

Stability AI, well known for its text-to-image work, today announced its first new AI model of 2024: Stable Code 3B. As the name suggests, Stable Code 3B is a 3-billion-parameter model focused on assisting with coding tasks. It runs natively on a laptop without a dedicated GPU, while still delivering performance competitive with larger models such as Meta’s CodeLLaMA 7B. At the end of 2023, Stability AI began pushing toward smaller, more compact, yet still powerful models, such as the StableLM Zephyr 3B model for text generation. With the arrival of 2024, Stability AI wasted no time in releasing Stable Code 3B, its first large language model of the year, right at the start of the year. In fact, a preview version, Stable Code Alpha 3B, was released as early as August last year, and Stability AI has been steadily improving the technology ever since. The new version of Stable Code 3B is designed for code completion and ships with a variety of additional features.
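
For readers who want to try it locally, a minimal CPU inference sketch with Hugging Face transformers might look like the following; the repo id stabilityai/stable-code-3b is an assumption to verify against the official model card, along with the license terms.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Code-completion style prompt: the model continues the function body.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```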


Shanghai AI Lab’s Scholar Puyu 2.0 (InternLM2) is officially open source, returning to the essence of language modeling

 

Link: https://news.miracleplus.com/share_link/16009

On January 17, the launch event for Scholar Puyu 2.0 (InternLM2) and the opening ceremony of the Scholar Puyu Large Model Challenge were held in Shanghai. Shanghai Artificial Intelligence Laboratory and SenseTime, together with the Chinese University of Hong Kong and Fudan University, officially released the new-generation large language model Puyu 2.0. InternLM2 is trained on a high-quality corpus of 2.6 trillion tokens. Following the setup of the first-generation Scholar Puyu (InternLM), InternLM2 comes in two parameter sizes, 7B and 20B, each with base and chat (dialogue) versions, to meet the needs of different and complex application scenarios. Adhering to its philosophy of “enabling innovation with high-quality open source”, Shanghai Artificial Intelligence Laboratory continues to provide free commercial licensing for InternLM2.
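
A dialogue-version loading sketch, again with Hugging Face transformers: the repo id internlm/internlm2-chat-7b and the chat() helper (exposed via the repo’s remote code) are assumptions to check against the official model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2-chat-7b"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).eval()

# chat() is provided by the repo's remote code (signature per its README;
# treat it as an assumption and verify before relying on it).
response, history = model.chat(tokenizer, "Hello! Who are you?", history=[])
print(response)
```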


OpenAI forms new team to collect public input to ensure AI large models are consistent with human values

 

Link: https://news.miracleplus.com/share_link/16010

On January 17th, local time in the United States, OpenAI, a leader in the field of artificial intelligence, announced on its blog that it is forming a new team called “Collective Alignment”. The team, made up mostly of researchers and engineers, will focus on designing and implementing processes for gathering public input to help train and shape the behavior of its AI models, in order to address potential bias and other issues.
