Big Model Daily on November 10


[Big Model Daily on November 10] GPTs is live: Sam Altman hand-rolled a clone of Musk’s large model on stage, and a third-party GPT marketplace (with tutorial) has already appeared; Humane, a startup founded by former Apple employees and backed by Altman, unveiled its first AI hardware, the AI Pin, which supports access to ChatGPT; AMD pushes into AI: it officially announced a special event on December 7, at which the MI300X GPU is expected to launch; new work from Fei-Fei Li’s team: brain-controlled robots do housework, giving brain-computer interfaces few-shot learning ability


It’s wild, GPTs is live: Sam Altman hand-rolled a clone of Musk’s large model on stage, and a third-party marketplace has already appeared (with tutorial)

https://news.miracleplus.com/share_link/11603

At its developer conference a few days ago, OpenAI said that any paying subscriber would be able to build their own applications on top of the new GPT-4 model, and everyone has been eager to try it. Early this morning, the feature, called GPTs, officially launched. All ChatGPT Plus subscribers can now customize a GPT end to end, without any coding knowledge, for tasks such as teaching, gaming, or creative design. For example, OpenAI CEO Sam Altman personally demonstrated building a new GPT that shares its name with Musk’s “Grok”.
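GPTs themselves are assembled in the ChatGPT interface rather than through code, but for readers who prefer an API, the Assistants API announced at the same developer conference exposes a similar idea programmatically. The snippet below is only an illustrative sketch: the assistant name, instructions, and tool choice are made-up placeholders, not anything demonstrated in the keynote.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical configuration, loosely mirroring what the GPT builder asks for:
# a name, natural-language instructions, a model, and optional built-in tools.
assistant = client.beta.assistants.create(
    name="Chore Planner",                  # placeholder name
    instructions=(
        "You are a friendly assistant that plans weekly household chores "
        "and answers follow-up questions about the plan."
    ),
    model="gpt-4-1106-preview",            # GPT-4 Turbo preview from the same DevDay
    tools=[{"type": "code_interpreter"}],
)

print(assistant.id)
```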


Backed by Altman and founded by former Apple employees, this company’s first AI hardware, a wearable pin, supports access to ChatGPT

https://news.miracleplus.com/share_link/11604

The latest piece of AI hardware, the AI Pin, comes from Humane, a startup backed by OpenAI’s Sam Altman whose founders all previously worked at Apple. It weighs about 55 g, roughly the same as a tennis ball, attaches to clothing with magnets, and reportedly will not fall off even while running. The device has built-in motion sensors, depth sensors, and other components to sense the wearer’s movement and the surrounding environment. It supports voice and gesture interaction and has visual recognition capabilities.


OpenAI seeks partners to generate datasets for training AI models

https://news.miracleplus.com/share_link/11605

OpenAI announced in a document that it will work with organizations to generate public/private datasets for training AI models. The data partnership aims to “enable more organizations to help guide the future of AI” and “benefit from more useful models.”


AMD pushes into AI: a special event officially announced for December 7, with the MI300X GPU expected to launch

https://news.miracleplus.com/share_link/11606

AMD announced that it will hold a dedicated AI event called “Advancing AI” at 2 a.m. Beijing time on December 7, and the event is expected to center on the release of the MI300X data center GPU. As early as June, AMD had said the MI300X series would go to partners in the third quarter and that the MI300A would begin sampling in the second quarter. Now AMD appears ready for the official launch.


Douyin tests its first text-to-image AIGC tool, Dreamina, which may be used for Douyin content creation

https://news.miracleplus.com/share_link/11607

Jianying (CapCut), a Douyin subsidiary, is testing an AIGC tool called “Dreamina” in the text-to-image category. From a single piece of text, users can generate four AI-created images spanning styles from abstract to realistic. Users can also adjust the output, such as the aspect ratio of the generated images and the template type. There are two templates, general and animation: the general template makes the generated images closer to real-life scenes, while the animation template makes them closer to cartoon and animation scenes.


China Telecom releases Star Semantics, a large model with 100 billion parameters, which can reduce design costs by 95%

https://news.miracleplus.com/share_link/11608

On November 10, at the 2023 Digital Technology Ecology Conference, China Telecom executives took turns announcing a raft of product, platform, and technology updates, including the one-stop intelligent-computing service platform “Huiju”, the Star Semantics (Xingchen) large model along with more than ten industry-specific large models, China Telecom’s “Tianyan” quantum computing cloud platform, and new 5G applications. Star Semantics is an upgrade of China Telecom’s self-developed large model, with its parameter scale raised from the million level to the hundred-billion level. After the upgrade, four key capabilities have improved significantly: hallucination suppression, context-window extrapolation, interactive experience, and multi-turn understanding. On the technical side, the model draws on more than 1.2 billion items of style data, cuts training memory by 50%, and speeds up inference by 4.5x; Chinese image understanding and generation improve by 30%, and fine-grained semantic generation improves by 25%. In terms of creative efficiency, production time with the model is 92% shorter than with previous production tools, and design costs are reduced by 95%.


Zhipu AI reportedly seeking a new financing round at a 20-billion-yuan valuation

https://news.miracleplus.com/share_link/11609

Zhipu AI, a Tsinghua-affiliated domestic large-model company, is reportedly seeking a new round of financing at a valuation of RMB 20 billion. Just 20 days ago, Zhipu AI publicly confirmed for the first time that it had raised a cumulative 2.5 billion yuan this year. Market rumors at the time held that its valuation had already climbed to 12 billion yuan as early as September, making it one of the most highly valued domestic large-model startups. Zhipu has not yet responded to the newly reported valuation.


New work from Fei-Fei Li’s team: brain-controlled robots do housework, giving brain-computer interfaces few-shot learning ability

https://news.miracleplus.com/share_link/11610

In the future, you may be able to have a robot do your housework with just a thought. NOIR, a system recently proposed by Jiajun Wu and Fei-Fei Li’s team at Stanford University, lets users control robots to complete everyday tasks through a non-invasive electroencephalography (EEG) device. NOIR decodes EEG signals and maps them onto a library of robot skills. It can already complete tasks such as cooking sukiyaki, ironing clothes, grating cheese, playing tic-tac-toe, and even petting a robot dog. The modular system has strong learning ability and can handle the complex and varied tasks of daily life.
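The write-up describes NOIR as a modular pipeline: non-invasive EEG is decoded into a chosen skill, a target object, and skill parameters, which the robot then executes from a predefined skill library. The sketch below only illustrates that decode-then-execute loop; every name in it (decode_skill, decode_object, Skill, and so on) is a hypothetical placeholder, not code from the NOIR system.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A parameterized robot skill from a predefined library (hypothetical)."""
    name: str

    def execute(self, target: str, params: dict) -> None:
        # A real system would invoke a motion planner / controller here.
        print(f"Executing '{self.name}' on '{target}' with {params}")

SKILL_LIBRARY = {name: Skill(name) for name in ("pick", "place", "pour", "wipe")}

def decode_skill(eeg_window) -> str:
    """Placeholder for the EEG classifier that selects a skill."""
    return "pick"

def decode_object(eeg_window) -> str:
    """Placeholder for decoding which object in the scene the user is attending to."""
    return "cheese grater"

def decode_params(eeg_window) -> dict:
    """Placeholder for decoding skill parameters such as a grasp point."""
    return {"grasp_point": (0.42, 0.17)}

def control_loop(eeg_stream) -> None:
    """Decode each EEG window into (skill, object, params) and execute it."""
    for eeg_window in eeg_stream:
        skill = SKILL_LIBRARY[decode_skill(eeg_window)]
        skill.execute(decode_object(eeg_window), decode_params(eeg_window))

# Dummy stream of three EEG windows, just to show the loop running.
control_loop([object(), object(), object()])
```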


Chinese teams take Best Paper and Best Systems Paper as the CoRL 2023 award-winning papers are announced

https://news.miracleplus.com/share_link/11611

Since its first edition in 2017, CoRL has become one of the world’s top academic conferences at the intersection of robotics and machine learning. CoRL is a single-track conference dedicated to robot learning research, covering topics across robotics, machine learning, and control, from theory to applications. The 2023 edition was held in Atlanta, USA, from November 6 to 9. According to official figures, 199 papers from 25 countries were accepted this year, with manipulation and reinforcement learning among the popular topics. Although CoRL is still small compared with large AI conferences such as AAAI and CVPR, with the rise of large models, embodied intelligence, and humanoid robots this year, the research presented at CoRL is well worth attention.


Turning an AI model into a five-star GTA player: Octopus, a vision-based programmable agent, is here

https://news.miracleplus.com/share_link/11612

To give large models embodied intelligence and build autonomous, situation-aware systems that can accurately formulate plans and execute commands, researchers from Nanyang Technological University in Singapore, Tsinghua University, and other institutions have proposed Octopus. Octopus is a vision-based programmable agent that learns from visual input, understands the real world, and completes a wide range of practical tasks by generating executable code. Trained on large numbers of paired visual inputs and executable code, Octopus learns how to control video game characters to complete game tasks or to carry out complex household activities.
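As a rough illustration of the observe-generate-execute loop described above, here is a minimal sketch of a vision-to-code agent. Everything in it (VisionLanguageModel, SimEnvironment, the action strings) is a hypothetical stand-in rather than the actual Octopus implementation.

```python
class VisionLanguageModel:
    """Stand-in for a VLM that turns an image plus a task into executable code."""

    def generate_code(self, frame, task: str) -> str:
        # A real agent would prompt the model with the observation and the task;
        # here we return a fixed snippet that appends actions to a shared list.
        return (
            "actions.append('move_to(fridge)')\n"
            "actions.append('open(fridge)')\n"
            "actions.append('grasp(drink)')"
        )


class SimEnvironment:
    """Stand-in for the simulated environment the agent observes and acts in."""

    def capture_frame(self) -> str:
        return "<rgb observation>"  # placeholder for a rendered camera frame

    def run(self, code: str, actions: list) -> bool:
        exec(code, {"actions": actions})  # execute the generated action code
        return bool(actions)              # pretend success once actions were produced


def run_agent(task: str, max_steps: int = 3) -> list:
    model, env, actions = VisionLanguageModel(), SimEnvironment(), []
    for _ in range(max_steps):
        frame = env.capture_frame()              # observe
        code = model.generate_code(frame, task)  # generate executable code
        if env.run(code, actions):               # execute; stop when the task looks done
            break
    return actions


print(run_agent("fetch a drink from the fridge"))
```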
