November 1 Large Model Daily Collection


[November 1 Large Model Daily Collection] The world's most powerful long-text large model, able to read 350,000 Chinese characters at a time: Baichuan2-192K is online; Google's AlphaFold model achieves a major breakthrough: it can predict biomolecules and ligands; Stanford's Ma Tengyu founds a large-model startup, with Manning, Chris Ré, and others as advisors; OpenAI sneaks into a hacker group chat! The pirated ChatGPT was replaced with "Meow Meow GPT", netizens: absolute legends; Twitter: LeCun on the Llama-2 open-source release: Meta's leadership believes the benefits of openly releasing Llama-2 will far outweigh its risks and should benefit the entire AI field; Paper: Defining a New Natural Language Processing (NLP) Playground


The world's most powerful long-text model, able to read 350,000 Chinese characters at a time: Baichuan2-192K is online

 

Link: https://news.miracleplus.com/share_link/11345

Domestic large-model startups keep setting new records at the frontier of the technology. On October 30, Baichuan Intelligent officially released Baichuan2-192K, a long-window large model that extends the large language model (LLM) context window to 192K tokens in one stroke. That lets the model process about 350,000 Chinese characters at a time, 14 times the length handled by GPT-4 (32K tokens, about 25,000 words) and 4.4 times that of Claude 2.0 (100K tokens, about 80,000 words). In other words, Baichuan2-192K can read "The Three-Body Problem 2" in a single pass and is currently the large model with the longest context window in the world. It also leads its rivals by a clear margin on dimensions such as text-generation quality, context understanding, and question answering. What can a model that digests very long text in one go actually do? Baichuan Intelligent ran a simple demonstration: upload the entire PDF of "The Three-Body Problem 2: The Dark Forest", which the Baichuan model counts at roughly 300,000 words; then ask any question about the novel, and the model gives a concise, accurate answer.


Google's AlphaFold model achieves a major breakthrough: it can predict biomolecules and ligands

 

Link: https://news.miracleplus.com/share_link/11346

On November 1, DeepMind, Google's AI research lab, announced on its official website the latest progress on its protein-structure prediction model AlphaFold: prediction accuracy has improved significantly, and coverage has expanded from proteins to other biomolecules, including ligands (small molecules). AlphaFold can reportedly predict almost all molecules in the Protein Data Bank (PDB), including ligands, proteins, nucleic acids (DNA and RNA), and molecules carrying post-translational modifications (PTMs), reaching laboratory-level atomic accuracy, a capability essential for medical research.


Stanford's Ma Tengyu founds a startup in the large-model space; Manning, Chris Ré, and others are advisors

 

Link: https://news.miracleplus.com/share_link/11347

On October 31, Ma Tengyu, a 2012 alumnus of Tsinghua University's Yao Class and currently an assistant professor at Stanford University, announced on social media that he has founded Voyage AI, a startup dedicated to building embedding (vectorization) models that help large language models (LLMs) achieve better retrieval quality. Voyage AI co-founder and CEO Ma Tengyu said the Voyage team is made up of talented AI researchers, including Stanford professors and Ph.D.s from Stanford and MIT. The company aims to help customers build better RAG applications and also offers customization services that it says raise the accuracy of customers' LLM products by 10-20%.

Retrieval-augmented generation, commonly known as RAG, is a powerful chatbot design pattern in which a retrieval system fetches verified sources or documents relevant to the query in real time and feeds them into a generative model (e.g., GPT-4) to produce a response. A chatbot's effectiveness therefore depends on the accuracy and relevance of the documents it retrieves: if the retrieved material mixes irrelevant information in with the exact facts needed, the LLM may hallucinate. Embeddings, as representations or "indexes" of documents and queries, are responsible for ensuring that retrieved documents actually contain information relevant to the query, and thus directly affect RAG quality. With high-quality retrieval, RAG ensures that generated responses are not only fluent but also contextually accurate and well grounded. On the day of the announcement, Voyage AI also released a new state-of-the-art embedding model and API, which it claims outperforms OpenAI's.

According to the official website, the company's academic advisors include Stanford professor Christopher Manning, Stanford associate professor Christopher Ré, and Fei-Fei Li, Stanford's inaugural Sequoia Chair Professor.
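The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not Voyage AI's actual API: the `embed` function here is a toy bag-of-words stand-in for a real embedding model, and `build_prompt` simply shows how retrieved context would be handed to a generative model.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a word-count vector (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by embedding similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble retrieved context plus the query for a generative model (e.g. GPT-4)."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Voyage AI builds embedding models for retrieval.",
    "The PC market rebounded in the third quarter.",
]
print(build_prompt("Who builds embedding models?", docs))
```

The point of the sketch is the division of labor: retrieval quality (and thus hallucination risk) is decided entirely inside `retrieve`, before the generative model ever sees the query, which is why better embeddings translate directly into better RAG answers.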


Signed! Intel, Lenovo, and iQiyi take the lead in accelerating the rollout of AI PCs

 

Link: https://news.miracleplus.com/share_link/11348

On the afternoon of October 31, Intel Corporation, Lenovo Group, and iQiyi held a trilateral memorandum-signing ceremony in Beijing. The three companies signed a memorandum of cooperation aimed at jointly accelerating AI transformation on the application side and bringing users an advanced new AI experience. The signing means the three parties will play to their respective strengths and cooperate in depth in the AI PC field.


AMD's revenue returns to growth in Q3 with net profit up 353%, as it squares off against Nvidia in the AI market

 

Link: https://news.miracleplus.com/share_link/11349

Benefiting from the rebound in the PC market, chip maker Advanced Micro Devices (AMD) has returned to growth. The company's recently released third-quarter report showed revenue of US$5.8 billion, up 4% year-on-year, ending two consecutive quarters of decline. On profitability, AMD's third-quarter net profit was US$299 million (GAAP), up 353% year-on-year; non-GAAP net profit was US$1.135 billion, up 4% year-on-year. By segment, AMD's data center division posted third-quarter revenue of US$1.598 billion, roughly flat year-on-year, with operating profit of US$306 million, down 39% year-on-year. The segment mainly covers cloud computing and large-enterprise server chips, and the flat revenue suggests demand there remains stable.


Andrew Ng joins the Turing Award winners' debate: the "AI extinction theory" will do more harm than good

 

Link: https://news.miracleplus.com/share_link/11350

The "AI extinction theory" has sparked yet another round of debate among big names in the field. The latest to join the fray is Andrew Ng, the well-known AI scholar and professor of computer science at Stanford University. Before this, the debate among the three giants of deep learning, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, was already under way. Hinton and Bengio largely share the same position: they believe oversight of AI technology needs to be strengthened. Just a few days ago, for example, Hinton and Bengio co-signed the open letter "Managing AI Risks in an Era of Rapid Progress", calling on researchers to take urgent governance measures and prioritize safety and ethical practices before advanced AI systems are deployed, and on governments to act to manage the risks AI poses.


Vivo's self-developed large models and operating system are here! The BlueLM ("Lan Xin") models power the newly released OriginOS 4

 

Link: https://news.miracleplus.com/share_link/11351

Just now, vivo officially released its self-developed general-purpose large-model matrix: five models in total, covering both device and cloud: a 1B device-side model; a 7B device-cloud dual-use model; a 70B main cloud model; a 130B cloud model; and a 175B cloud model. The 7B version will be open-sourced, making vivo the first phone maker to open-source a large model. Vivo has also brought the 13B version onto the device side, an industry first, meaning a model with 13 billion parameters can run directly on a phone. On top of these model capabilities, vivo launched two large-model products: the Lan Xin Qianxun natural-language chatbot and the Lan Xin Xiao V assistant. The former handles natural conversation, text creation, image generation, code writing, and more, and will be listed in major app stores. The latter is a system-level AI assistant built into vivo's newly released OriginOS 4; beyond fluent conversation, it can also process images, for example erasing unwanted passers-by from photos.


GPT-4 writes the code, DALL·E 3+MJ handles the graphics, and the AI version of “Angry Pumpkin” is coming

 

Link: https://news.miracleplus.com/share_link/11352

Since the rise of GPT-series dialogue models and text-to-image models such as DALL·E and Midjourney, hard-core and entertaining applications built on them have appeared one after another, letting ordinary people experience the appeal of large models firsthand. Another such game project caught our attention today: Twitter user @javilopen used GPT-4, DALL·E 3, and Midjourney to build the mini-game "Angry Pumpkin" (any resemblance is purely coincidental). GPT-4 handled all the coding, while DALL·E 3 and Midjourney produced the graphics.


OpenAI sneaks into hacker group chat! The stolen ChatGPT was replaced with “Meow Meow GPT”, netizen: Absolute legend

 

Link: https://news.miracleplus.com/share_link/11353

How would OpenAI respond to ChatGPT being pirated by hackers? Cut off the API so they can't use it? No, no, no. The approach these geeks took was far more mischievous: a reverse "Infernal Affairs". Although OpenAI ran extensive safety testing before releasing ChatGPT, once the API opened it still could not stop certain hackers from abusing it. Then one day, an engineer on the team noticed abnormal traffic on a ChatGPT endpoint; after some investigation, the team concluded that someone was very likely reverse-engineering the API to resell pirated access. OpenAI chose not to shut the hackers down immediately, because doing so would tip them off, and they would simply change tactics and keep attacking. Instead, a team member came up with a clever idea: turn it into "catGPT", with every token replaced by "meow"…


The AI product that is more profitable than ChatGPT is…

 

Link: https://news.miracleplus.com/share_link/11354

ChatGPT's mobile downloads far exceed those of similar products, and its revenue is also far ahead. Even so, those numbers are not enough to put ChatGPT at the top of the list of the most profitable AI applications. ChatGPT's mobile app has grown in downloads and revenue since launching in May. According to Apptopia, ChatGPT logged 3.9 million downloads on iOS alone in its first month, growing to 15.1 million in June; after a summer-vacation dip in July, downloads rebounded past 23 million in September. Usage has risen in step: MAU climbed from 1.34 million in May to 38.88 million in September, with revenue reaching nearly US$2.39 million as of October 24.


"Europe's OpenAI" seeks $300 million in financing

 

Link: https://news.miracleplus.com/share_link/11355

European AI startup Mistral plans to raise another $300 million from investors, just four months after its first round, having previously raised $113 million in a seed round led by Lightspeed Venture Partners. According to foreign media reports, the current round is expected to value the Paris-based startup at more than $1 billion. Under its seed-round business plan, Mistral intends to donate 1% of the funds raised to a non-profit foundation focused on the open-source community. Mistral, co-founded by former DeepMind research scientist Mensch along with Timothée Lacroix and Guillaume Lample, was valued at $260 million in its June seed round. The company is developing an open-source LLM and positions itself as "Europe's OpenAI"; its products are also designed to comply with stricter European regulations, such as the EU AI Act, with an emphasis on privacy and security.
