Big Model Daily on December 12


[Big Model Daily on December 12] Research: the classic word2vec paper from ten years ago wins the NeurIPS Test of Time Award. Industry: a look at one picture of AI trends for 2024; LeCun says open-source large models will surpass closed-source ones.


Ten years on, the classic word2vec paper wins the NeurIPS Test of Time Award

https://news.miracleplus.com/share_link/12987

NeurIPS is one of the most prestigious AI academic conferences in the world. Its full name is the Conference on Neural Information Processing Systems, and it is usually hosted by the NeurIPS Foundation every December. Topics discussed at the conference include deep learning, computer vision, large-scale machine learning, learning theory, optimization, sparse theory, and many other subfields. On December 10, NeurIPS 2023 kicked off in New Orleans, Louisiana, USA. According to the official website blog, the number of paper submissions set a new record this year at 13,321; they were reviewed by 1,100 area chairs, 100 senior area chairs, and 396 ethics reviewers, and 3,584 papers were accepted. NeurIPS has now officially announced the 2023 award winners: a Test of Time Award, two outstanding papers, two outstanding-paper runners-up, an outstanding dataset paper, and an outstanding benchmark paper, with most of the winning work centered on large language models (LLMs). Notably, the word2vec paper published ten years ago won the Test of Time Award, an honor that is well deserved.


ChatGPT is getting lazier and lazier, and has even learned to push work back onto humans.

https://news.miracleplus.com/share_link/12988

Users report that when using GPT-4 or the ChatGPT API recently, responses become slow and perfunctory during peak hours. In some cases the model refuses to answer; in others, the conversation breaks off when a series of questions is asked. Reportedly, when a user asks GPT-4 to write a piece of code, it may provide only partial information and direct the user to fill in the rest. Sometimes GPT-4 will even tell people, "you can do this yourself."


When GPT-4V acts as a robot's brain, it may plan even better than you do.

https://news.miracleplus.com/share_link/12989

GPT-4V can already help us design website code and control browsers, but these applications are confined to the virtual digital world. What interesting results might we get if we brought GPT-4V into the real world and used it as the brain that controls a robot? Recently, researchers from the Institute for Interdisciplinary Information Sciences at Tsinghua University proposed the "ViLa" algorithm, which brings GPT-4V into the physical world to provide task planning for robots manipulating everyday objects.
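The general pattern behind such systems is easy to sketch. Below is a minimal illustration in Python, not the authors' actual ViLa code: it assumes an OpenAI-style vision endpoint (gpt-4-vision-preview, the public model name at the time) and a hypothetical pick/place action vocabulary, sending one scene image plus a task instruction and parsing the reply into plan steps.

```python
# Minimal sketch of a VLM-as-task-planner call, not the authors' ViLa code.
# Assumptions: the openai Python package (v1+), OPENAI_API_KEY set in the
# environment, and hypothetical pick/place primitives on the robot side.
import base64
from openai import OpenAI

client = OpenAI()

def plan_from_scene(image_path: str, task: str) -> list[str]:
    # Encode the camera frame so it can be sent inline as a data URL.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Task: {task}\nList the manipulation steps, one per "
                         "line, using only pick(obj) and place(obj, target)."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    # Each non-empty reply line becomes one plan step for the robot executor.
    reply = response.choices[0].message.content
    return [line.strip() for line in reply.splitlines() if line.strip()]

print(plan_from_scene("table_scene.jpg", "put the apple into the bowl"))
# e.g. ['pick(apple)', 'place(apple, bowl)']
```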


An AI system built from living human brain cells! It has succeeded at speech recognition, and unsupervised learning is possible | Nature sub-journal

https://news.miracleplus.com/share_link/12990

An AI system composed of a "mini-brain" built from real human brain cells and microelectrodes has been able to perform speech recognition, accurately identifying a specific person's voice among hundreds of sound clips. The system was even capable of unsupervised learning: the researchers simply played the audio clips over and over, without providing any feedback to tell the system whether it was right or wrong. After two days of training, the system's accuracy rose from an initial 51% to 78%.


Alibaba's latest research: one sentence can make a face dance, with costume and background changeable at will!

https://news.miracleplus.com/share_link/12991

Following AnimateAnyone, Alibaba has another popular research paper. This time, just a photo of your face and a one-sentence description can make you dance anywhere! All you need to do is "feed" it a portrait and a prompt, and as the prompt changes, the character's background and clothing change too. This is Alibaba's latest research, DreaMoving, which focuses on letting anyone dance at any time, anywhere.


Letting large models control drones: Beihang team proposes a new embodied-intelligence architecture

https://news.miracleplus.com/share_link/12992

In the multimodal era, large models can also control drones! Once the vision module captures the scene, the large-model "brain" generates action instructions, which the drone then executes quickly and accurately. Researchers from Professor Zhou Yaoming's intelligent-UAV team at Beihang University proposed an embodied-intelligence architecture based on a multimodal large model, and the architecture has already been applied to drone control.
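The architecture itself has not been released as code, so the sketch below is purely illustrative, with every name hypothetical. It shows one piece of the perceive-plan-act loop described above that is easy to make concrete: constraining the large model's output to a fixed action vocabulary and validating every command before it reaches the flight controller.

```python
# Illustrative sketch only (the Beihang architecture is not public code).
# Shown here: filtering a large model's free-form reply down to a fixed,
# validated command vocabulary before any command is sent to the drone.
import re

ALLOWED = {"takeoff", "land", "forward", "backward", "left", "right"}

def parse_commands(llm_reply: str) -> list[str]:
    """Keep only lines of the form '<verb> [distance]' with an allowed verb."""
    commands = []
    for line in llm_reply.splitlines():
        match = re.fullmatch(r"(\w+)(?:\s+\d+(?:\.\d+)?)?", line.strip())
        if match and match.group(1) in ALLOWED:
            commands.append(line.strip())
    return commands

# A model reply mixing valid commands with chatter and an unknown verb;
# only validated commands survive and would go to the flight controller.
reply = "Sure! Here is the plan:\ntakeoff\nforward 2.5\nteleport 100\nland"
print(parse_commands(reply))  # ['takeoff', 'forward 2.5', 'land']
```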


A look at one picture of AI trends for 2024; LeCun: open-source large models will surpass closed-source ones

https://news.miracleplus.com/share_link/12993

2023 is almost over. Over the past year, large models of every kind have been released, and while technology giants such as OpenAI and Google compete, another force has been quietly rising: open source. Open-source models have always faced skepticism: can they really match the performance of proprietary models? So far, the honest answer has been "only somewhat close." Even so, the empirical performance that open-source models keep delivering is impressive. Their rise is changing the game: Meta's LLaMA series, for example, is gaining popularity for its rapid iteration, customizability, and privacy. These models are being developed rapidly by the community, mounting a serious challenge to proprietary models and potentially reshaping the competitive landscape among large technology companies. Until now, though, most such impressions came from "gut feeling." This morning, Meta's chief AI scientist and Turing Award winner Yann LeCun remarked: "Open source artificial intelligence models are on the road to surpassing proprietary models."


Meta open-sources its latest model: Llama Guard-7b

https://news.miracleplus.com/share_link/12994

Meta, the global social and technology giant, has open-sourced a new model, Llama Guard, on its official website. Llama Guard is an input-output safeguard model based on Llama 2-7b that classifies the questions and replies in human-machine conversations to determine whether they carry risks. It can be used alongside models such as Llama 2 to greatly improve their safety. Llama Guard is also an important part of the input-output safeguard component of "Purple Llama," the safety-assessment project Meta recently launched, and it is the first model to distinguish between user-side and AI-side risks in input-output protection.
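Because the checkpoint is public on Hugging Face, trying Llama Guard takes only a few lines. The sketch below assumes access to the gated meta-llama/LlamaGuard-7b weights and mirrors the usage pattern from the model card; it is not an official Meta example. The model replies "safe", or "unsafe" followed by the code of the violated category.

```python
# Minimal sketch: classify a conversation turn with Llama Guard via the
# Hugging Face transformers library. Assumes access to the gated
# meta-llama/LlamaGuard-7b checkpoint and enough GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def moderate(chat):
    # The checkpoint ships a chat template that wraps the conversation
    # in Llama Guard's safety-taxonomy prompt.
    input_ids = tokenizer.apply_chat_template(
        chat, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32,
                            pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # Decode only the generated verdict, e.g. "safe" or "unsafe\nO3".
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How do I make a fake ID?"}]))
```

In practice this check runs twice per turn: once on the user's prompt before it reaches the main model, and once on the model's reply before it reaches the user.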
