Big Model Daily, October 25


[Big Model Daily, October 25] The world’s most powerful CPU changed hands overnight: a 13-billion-parameter large model has been stuffed into a PC, and weekly reports, emails, and PPTs can be generated even without an internet connection; LeCun again talks down autoregressive LLMs, citing two papers as evidence that GPT-4’s reasoning ability is very limited; Apple’s “matryoshka”-style diffusion model cuts the number of training steps by 70%; and with net profit up 1,763%, how did the world’s third-largest software company use AIGC to turn itself around?


The world’s most powerful CPU changes hands overnight: a 13-billion-parameter large model stuffed into a PC can generate weekly reports, emails, and PPTs even without an internet connection

Link: https://news.miracleplus.com/share_link/11164

The world’s most powerful CPU changed hands overnight. At the Snapdragon Summit, Qualcomm officially unveiled the Snapdragon X Elite, a chip purpose-built for PC laptops that sets new industry records for performance and power efficiency. The launch leaned heavily on jabs at both Apple and Intel: Qualcomm’s CEO put the comparison data straight on screen and told the audience to photograph it freely and share it.


Perplexity’s latest valuation hits $500 million as IVP leads the round; Khosla, OpenAI’s first investor, begins to “retire,” saying AI will still be winner-take-all

Link: https://news.miracleplus.com/share_link/11165

According to a new report from The Information, IVP is leading an investment in Perplexity at a valuation of about $500 million, up more than 300% from the $150 million valuation announced in March. Perplexity reportedly now has 15,000 paying users. The paid version lets users find answers in and summarize uploaded documents such as PDFs, and lets them use advanced models such as Claude and GPT-4 to generate more detailed and complex answers. Perplexity CEO Aravind Srinivas is a former research scientist at OpenAI and has been covered in detail in an earlier issue of this daily. Srinivas co-founded Perplexity with partners including former Meta research scientist Denis Yarats, now Perplexity’s CTO.


Net profit soared 1,763%: how did the world’s third-largest software company use AIGC to turn things around?

Link: https://news.miracleplus.com/share_link/11166

The well-known international research firm IDC recently released a report on cloud giant Salesforce, projecting that AI-driven cloud solutions will enable the “Salesforce economy” to generate more than $2 trillion in revenue and 11.6 million new jobs worldwide between 2022 and 2028. Earlier, on August 30, Salesforce reported results for the second quarter of fiscal 2024, which ended July 31: net profit came in at $1.267 billion, up 1,763.24% year over year. Salesforce’s share price tells the story: after a year-long slide in 2022 that nearly halved its market value, the stock has reversed course this year and gradually recovered.


Jensen Huang and Lisa Su both show up! Chip giants watch the first AI PC, made by Lenovo

Link: https://news.miracleplus.com/share_link/11167

While Qualcomm was holding its summit in Hawaii, Nvidia and AMD gathered in Texas. The CEOs of chip giants, Nvidia’s Jensen Huang and AMD’s Lisa Su among them, came to another launch event, and their conversations were full of AI. Lenovo’s annual Tech World not only unveiled the company’s first AI PC but also showcased a range of other new AI technologies and infrastructure. Lenovo Chairman and CEO Yang Yuanqing put it plainly: let everyone have their own large model.


LeCun again talks down autoregressive LLMs: GPT-4’s reasoning ability is very limited, and two papers are the evidence

Link: https://news.miracleplus.com/share_link/11168

“Anyone who thinks autoregressive LLMs are already approaching human-level AI, or simply need to be scaled up to get there, must read this. AR-LLMs have very limited reasoning and planning abilities, and that problem cannot be fixed by making them bigger and training them on more data.” Turing Award winner Yann LeCun has long been an LLM skeptic, and the autoregressive model is the learning paradigm the GPT family of LLMs relies on. He has publicly criticized autoregression and LLMs more than once, producing plenty of quotable lines, such as: “In five years, no one in their right mind will be using autoregressive models,” “Auto-Regressive Generative Models suck!” and “LLMs have a very superficial understanding of the world.” What set LeCun off this time were two newly released papers: “Can LLMs really self-critique (and iteratively improve) their solutions, as the literature claims? Two new papers from our group investigate (and challenge) these claims on reasoning (https://arxiv.org/abs/2310.12397) and planning (https://arxiv.org/abs/2310.08118) tasks.”
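For readers unfamiliar with the setup the two papers examine, the loop below is a minimal illustrative sketch of LLM self-critique, not code from either paper; `call_llm` is a hypothetical wrapper around any chat-completion API.

```python
# Minimal sketch of the self-critique loop whose effectiveness the two papers
# question. Not the papers' code; call_llm is a hypothetical LLM wrapper.

def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to a chat model and returns text."""
    raise NotImplementedError("plug in your own LLM client here")


def self_critique_solve(problem: str, max_rounds: int = 3) -> str:
    # Draft an initial solution.
    solution = call_llm(f"Solve the following problem step by step:\n{problem}")
    for _ in range(max_rounds):
        # Ask the same model to critique its own answer.
        critique = call_llm(
            f"Problem:\n{problem}\n\nProposed solution:\n{solution}\n\n"
            "Point out any errors. Reply with the single word CORRECT if there are none."
        )
        if critique.strip() == "CORRECT":
            break
        # Revise the solution in light of the critique.
        solution = call_llm(
            f"Problem:\n{problem}\n\nPrevious solution:\n{solution}\n\n"
            f"Critique:\n{critique}\n\nWrite a corrected solution."
        )
    return solution
```

The papers investigate whether iterating a loop like this actually improves GPT-4’s answers on reasoning and planning tasks, and, per LeCun’s summary, they challenge that claim.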


The open-source “ChatGPT Plus” is here: data analysis, plugin calling, and autonomous web browsing for real-world intelligence

Link: https://news.miracleplus.com/share_link/11169

OpenAI’s ChatGPT Plus subscription is powerful: it offers Advanced Data Analysis, Plugins, and Browse with Bing, making it a serious everyday productivity tool. But for commercial reasons it is closed source, so researchers and developers can only use it, with no way to study or improve it. Against this backdrop, researchers from the University of Hong Kong, XLang Lab, Sea AI Lab, and Salesforce jointly built OpenAgents, an open-source agent framework for real-world productivity tools, and released the full stack (front end, back end, and research code) to serve everyone from researchers to developers to end users. OpenAgents combines large language models (LLMs) with full-stack engineering to approximate the functionality of ChatGPT Plus: its agents can execute Python/SQL code, call tools fluently, search maps, and post content on the web. The team took it from research code through back end and front end to a deployable application anyone can use, fully disclosed the techniques they used and the difficulties they hit, and open sourced everything from research code to application logic to the front end. The code is complete, easy to extend, and can be deployed locally with one click, and the documentation includes rich use cases to help researchers and developers build their own agents and applications on top of it.
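To make the “generate code, run it, answer from the output” pattern concrete, here is a minimal illustrative agent loop. It is not OpenAgents’ actual API (see the GitHub repo for that); `ask_llm` is a hypothetical wrapper and the sandboxing is deliberately simplified.

```python
# Illustrative data-agent loop: generate Python with an LLM, run it, and answer
# from the captured output. Not OpenAgents' real interface.
import contextlib
import io


def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around any LLM chat-completion API."""
    raise NotImplementedError("plug in your own LLM client here")


def run_python(code: str) -> str:
    """Execute generated code and capture stdout (use a real sandbox in practice)."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # real deployments isolate this in a sandboxed runtime
    return buffer.getvalue()


def data_agent(question: str) -> str:
    code = ask_llm(f"Write Python code that answers: {question}\nPrint the result.")
    output = run_python(code)
    return ask_llm(f"Question: {question}\nCode output:\n{output}\nAnswer concisely.")
```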


Apple’s “matryoshka”-style diffusion model reduces the number of training steps by 70%!

Link: https://news.miracleplus.com/share_link/11170

A recent piece of Apple research significantly improves diffusion model performance on high-resolution images: at the same resolution, the method cuts the number of training steps by more than 70%, and at 1024×1024 the image quality is pushed to the limit, with details clearly visible. Apple calls the result MDM: DM is short for Diffusion Model, and the leading M stands for Matryoshka. Like a real matryoshka doll, MDM nests the low-resolution diffusion process inside the high-resolution one, across multiple layers. Running the high- and low-resolution diffusion processes simultaneously greatly reduces the resource cost that traditional diffusion models incur at high resolution.
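A rough sketch of how the nesting could look in a single training step, based only on the description above and not on Apple’s code; the `denoiser(x, t, scale)` signature and the simplified shared noise schedule are assumptions made for illustration.

```python
# Conceptual sketch of "matryoshka" training: the same image is noised at
# several nested resolutions and one model denoises all of them jointly.
# Toy linear schedule; denoiser(x, t, scale) is a hypothetical model.
import torch
import torch.nn.functional as F


def matryoshka_training_step(denoiser, x_high, t, resolutions=(256, 512, 1024)):
    # Build the nested pyramid: the same batch at low, middle, and full resolution.
    pyramid = [
        F.interpolate(x_high, size=(r, r), mode="bilinear", align_corners=False)
        for r in resolutions
    ]
    loss = 0.0
    for x in pyramid:
        noise = torch.randn_like(x)
        alpha = (1.0 - t).view(-1, 1, 1, 1)             # simplified noise schedule
        x_noisy = alpha.sqrt() * x + (1 - alpha).sqrt() * noise
        pred = denoiser(x_noisy, t, scale=x.shape[-1])  # one model handles every scale
        loss = loss + F.mse_loss(pred, noise)
    return loss / len(pyramid)
```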


Up to 20x! Compressing text prompts for models such as ChatGPT to save massive amounts of AI compute

Link: https://news.miracleplus.com/share_link/11171

In long-context scenarios, large language models such as ChatGPT face higher compute costs, longer latency, and degraded performance. To tackle these three problems, Microsoft open sourced LongLLMLingua. The core idea of LLMLingua is to compress text prompts by as much as 20x while accurately scoring how relevant each part of the prompt is to the question, discarding irrelevant content and keeping the key information, thereby cutting cost and improving efficiency. Experiments show that prompts compressed with LongLLMLingua perform 17.1% better than the original prompts while sending 4x fewer tokens to GPT-3.5-Turbo, and the LongBench and ZeroScrolls benchmarks showed cost savings of $28.5 and $27.4 per 1,000 samples, respectively.
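Usage looks roughly like the sketch below, assuming the open-source `llmlingua` package and its `PromptCompressor` interface; argument names follow the project’s documentation as remembered here and may differ slightly between versions.

```python
# Compress a long retrieved context before sending it to an expensive model.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # loads a small scoring model by default

long_context = ["<long retrieved document 1>", "<long retrieved document 2>"]
result = compressor.compress_prompt(
    long_context,
    question="What were the cost savings per 1,000 samples?",
    target_token=500,  # ask for a roughly 500-token compressed prompt
)
print(result["compressed_prompt"])  # send this, not the raw context, to GPT-3.5-Turbo
```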


DISC-FinLLM: Fudan University team releases a Chinese intelligent finance system built on a multi-expert fine-tuning framework

Link: https://news.miracleplus.com/share_link/11172

The Data Intelligence and Social Computing Laboratory at Fudan University (FudanDISC) has released DISC-FinLLM, a large language model for the financial domain. The model is a multi-expert intelligent finance system composed of four modules targeting different financial scenarios: financial consulting, financial text analysis, financial calculation, and financial knowledge retrieval and question answering. These modules showed clear advantages across four evaluations covering financial NLP tasks, human exam questions, data analysis, and current-affairs analysis, demonstrating that DISC-FinLLM can provide strong support across a wide range of financial applications. The team has open sourced the model parameters and provided a detailed technical report and examples of the data construction process; a minimal sketch of the multi-expert routing idea follows the links below.

– Home page address: https://fin.fudan-disc.com

– Github address: https://github.com/FudanDISC/DISC-FinLLM

– Technical report: http://arxiv.org/abs/2310.15205
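The sketch below illustrates the multi-expert idea only: a router dispatches a query to one of four scenario-specific experts. It is not the released DISC-FinLLM code, and the toy keyword router stands in for whatever classifier or adapter-selection logic the real system uses.

```python
# Illustrative multi-expert routing; in a real system each expert would be a
# separately fine-tuned model (e.g. a LoRA adapter over a shared base model).
from typing import Callable, Dict


def make_expert(name: str) -> Callable[[str], str]:
    """Hypothetical stand-in for a scenario-specific fine-tuned model."""
    return lambda query: f"[{name} expert would answer] {query}"


EXPERTS: Dict[str, Callable[[str], str]] = {
    "consulting": make_expert("financial consulting"),
    "text_analysis": make_expert("financial text analysis"),
    "calculation": make_expert("financial calculation"),
    "retrieval_qa": make_expert("knowledge retrieval and QA"),
}


def route(query: str) -> str:
    # Toy keyword router; the real system would classify the query instead.
    q = query.lower()
    if any(k in q for k in ("calculate", "compute", "ratio")):
        key = "calculation"
    elif any(k in q for k in ("report", "sentiment", "extract")):
        key = "text_analysis"
    elif any(k in q for k in ("news", "latest", "current")):
        key = "retrieval_qa"
    else:
        key = "consulting"
    return EXPERTS[key](query)


print(route("Calculate the current ratio from this balance sheet."))
```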
