Big Model Daily, December 9-10


[Big Model Daily, December 9-10] Realistic down to the hair, with adjustable lighting: Meta launches a real-time 3D avatar synthesis method. A magnet link sweeps the AI circle: an 87 GB torrent directly open-sources an 8x7B MoE model.


Realistic down to the hair, with adjustable lighting: Meta launches a real-time 3D avatar synthesis method

https://news.miracleplus.com/share_link/12818

In 2021, Facebook made the “metaverse” the company’s main business and changed its name to Meta. This year, however, with the emergence of ChatGPT, generative AI has become the new research trend, and many technology companies have made it a core R&D focus. But Meta has never stopped its VR/AR research. Recently, Meta’s Codec Avatars Lab proposed a high-fidelity, light-adjustable virtual avatar synthesis method: Relightable Gaussian Codec Avatars.


Huawei Noah & Tsinghua: CoSeR, a cognition-driven large super-resolution model for everything

https://news.miracleplus.com/share_link/12819

Recently, research from Tsinghua University, Huawei’s Noah’s Ark Lab, the Hong Kong University of Science and Technology, and other institutions implemented a cognitive super-resolution framework that combines image appearance with language understanding to generate cognitive features, enabling the SR model to understand low-resolution images. The authors argue that a large image-restoration model effective in real-world scenes should have a multi-step repair capability akin to System 2 thinking: recognizing image content and combining it with prior knowledge to achieve Cognitive Super-Resolution (CoSeR).


HumanGaussian open source: Based on Gaussian Splatting, a new framework for high-quality 3D human body generation

https://news.miracleplus.com/share_link/12820

Recently, the explicit neural representation 3D Gaussian Splatting (3DGS) has offered a new perspective on real-time scene reconstruction. It supports multi-granularity, multi-scale modeling and is well suited to 3D human body generation. In recent work, teams from the Chinese University of Hong Kong, Tencent AI Lab, Peking University, the University of Hong Kong, and Nanyang Technological University released HumanGaussian, an effective and fast 3D human generation model that introduces explicit human structure guidance and gradient normalization to assist the 3D Gaussian optimization process, generating diverse, realistic, high-quality 3D human models. Both the code and the model are open source.
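For readers unfamiliar with 3DGS, the primitive being optimized is a set of anisotropic Gaussians that are projected to screen space and alpha-composited front to back. The sketch below illustrates that generic core operation only; it is not HumanGaussian’s actual implementation, and all names are illustrative:

```python
import numpy as np

def splat_alpha(pixel, mean2d, cov2d, opacity):
    """Alpha contribution of one projected Gaussian at a pixel.

    After projection, each 3D Gaussian's screen-space footprint is a
    2D Gaussian whose density scales the splat's opacity.
    """
    d = pixel - mean2d
    power = -0.5 * d @ np.linalg.inv(cov2d) @ d
    return opacity * np.exp(power)

def composite(alphas, colors):
    """Front-to-back alpha compositing of depth-sorted splats."""
    out, transmittance = np.zeros(3), 1.0
    for a, c in zip(alphas, colors):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
    return out
```

Because both steps are differentiable, the Gaussians’ means, covariances, opacities, and colors can all be optimized by gradient descent, which is what makes 3DGS attractive for generation tasks.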


AI becomes a photo-identification master and can understand Interstellar! New work from Jia Jiaya’s team: a multimodal large model takes on a nearly 3-hour video

https://news.miracleplus.com/share_link/12821

The latest research from Jia Jiaya’s team lets large models directly learn to process extremely long videos. Give it the sci-fi blockbuster “Interstellar” (2 hours and 49 minutes long): after “watching” it, the model can not only comment on the film based on its plot and characters, but also accurately answer questions about details in it.
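The article does not say how the model fits a near-3-hour film into an LLM context, but simple arithmetic shows why aggressive per-frame token compression is essential for any method in this space. The sampling rate and token counts below are illustrative assumptions, not the paper’s numbers:

```python
def frame_token_budget(duration_s, fps, tokens_per_frame):
    """Token cost of a video at a given sampling rate and per-frame
    compression level."""
    n_frames = int(duration_s * fps)
    return n_frames, n_frames * tokens_per_frame

# "Interstellar" runs 2h49m; assume 1 fps sampling.
duration = 2 * 3600 + 49 * 60   # 10140 seconds -> 10140 frames at 1 fps

_, naive = frame_token_budget(duration, fps=1, tokens_per_frame=256)
_, compressed = frame_token_budget(duration, fps=1, tokens_per_frame=2)
# Hundreds of tokens per frame blows far past any context window;
# a couple of tokens per frame brings the whole film within reach.
```

At 256 tokens per frame the film costs millions of tokens; at 2 tokens per frame it drops to roughly twenty thousand, which fits comfortably in a long-context model.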


A magnet link sweeps the AI circle: an 87 GB torrent directly open-sources an 8x7B MoE model

https://news.miracleplus.com/share_link/12822

“High-end” open source often ships in the simplest way. Yesterday, Mistral AI posted a magnet link on the X platform to announce its new open-source release: no long official blog post, no carefully staged demo. The company counts as “a breath of fresh air” in today’s large-model field. Opening the link reveals a torrent of nearly 87 GB.
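Mistral published no details alongside the torrent, but an “8x7B MoE” denotes a sparse mixture-of-experts transformer: each layer holds eight expert feed-forward networks, and a router activates only a few of them per token, so far less than 8x7B parameters are used per forward pass. A minimal, framework-free sketch of top-k routing (a generic illustration; the actual routing code and hyperparameters here are assumptions):

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Sparse mixture-of-experts layer with top-k routing.

    A router scores every expert for the token, keeps the top k,
    renormalizes their scores with a softmax, and mixes only those
    experts' outputs.
    """
    logits = x @ gate_w                    # one score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                           # softmax over selected experts
    return sum(wi * experts[i](x) for wi, i in zip(w, top))
```

With k=2 and eight experts, each token touches only two experts’ feed-forward weights, which is how such models keep inference cost close to a much smaller dense model.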


Google takes advantage of OpenAI’s internal strife to reorganize its AI team and poach Bill Jia, a top Chinese executive in Silicon Valley

https://news.miracleplus.com/share_link/12823

Besides quietly preparing a surprise Gemini release, what is Google doing? It is reorganizing its AI teams internally, poaching senior executives externally, and working to rebuild its competitiveness while its rival stumbles. That is what QbitAI has just learned. Google reportedly first quietly wound down all AI-related departments, then reorganized them into a new department code-named “Core AI”. More importantly, Google hired the head of Core AI from outside: Bill Jia, currently the highest-ranking Chinese executive among the major Silicon Valley companies. Yes, Meta’s Bill Jia, Senior Vice President of Engineering, where he oversaw AI/ML infrastructure, data infrastructure, performance and capacity engineering, and hardware engineering. His best-known achievement in tech circles is PyTorch, one of the most popular AI frameworks today.


OpenAI admits GPT-4 has become lazy: no fix for now

https://news.miracleplus.com/share_link/12824

OpenAI has officially responded to the increasingly serious problem of GPT-4 laziness, posting from the official ChatGPT account: we’ve received the feedback! The model hasn’t been updated since November 11th, so this is certainly not intentional. Model behavior can be unpredictable, and we are investigating a fix.


Built for phones and laptops: Stability.ai open-sources a language model with ChatGPT genes

https://news.miracleplus.com/share_link/12825

On December 8, the well-known open-source generative AI platform Stability.ai open-sourced StableLM Zephyr 3B, a 3-billion-parameter language model, on its official website. Zephyr 3B is designed for phones, laptops, and other mobile devices: small in parameters, strong in performance, and light on compute, it can generate text, summarize content, and more, rivaling models with 7 billion and 13 billion parameters. Notably, its core architecture is fine-tuned from Zephyr 7B, which in turn was fine-tuned from Mistral AI’s Mistral-7B; Mistral AI just closed a huge financing round of about 3.5 billion yuan a few days ago. Meanwhile, GPT-3.5 was used to generate the training dataset and GPT-4 provided AI feedback, making Zephyr 3B a patchwork creature carrying model genes from several major players.
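The Zephyr line is associated with alignment via direct preference optimization (DPO) over AI-generated preference pairs, which is likely what “GPT-4 for AI feedback” refers to: a judge model picks the better of two responses, and DPO nudges the policy toward it. A minimal sketch of the per-pair DPO loss (a generic illustration under that assumption, not Zephyr’s actual training code):

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Pushes the policy to assign relatively more probability to the
    response the judge preferred, measured against a frozen reference
    model; beta controls how far the policy may drift from it.
    """
    margin = beta * ((logp_chosen - ref_chosen)
                     - (logp_rejected - ref_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log(sigmoid(margin))
```

When the policy matches the reference, the margin is zero and the loss is log 2; raising the chosen response’s log-probability relative to the rejected one drives the loss down, with no reward model or RL loop required.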


Inflection AI’s chatbot Pi launches Android version

https://news.miracleplus.com/share_link/12826

Inflection AI, an artificial intelligence startup founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman, announced that its AI chatbot Pi is now available as an Android app.


Google updates Notebook LM AI note-taking app, adding latest Gemini Pro model

https://news.miracleplus.com/share_link/12827

Google launched Notebook LM at its I/O 2023 conference in May. It is an AI note-taking application that can generate summaries and other content from user notes; registration is required to use it. Google has now updated the app, chiefly adding the latest Gemini Pro model, and says it is expanding availability.
