The Polar Fox (Arcfox) Alpha S Huawei HI edition comes equipped with three lidars. On open roads, I experienced its Huawei ADS autonomous driving system handling scenarios such as starting and stopping at traffic lights, unprotected left turns, avoiding vehicles at intersections, and yielding to pedestrians.
What is the level of this car's autonomous driving technology?

Overall, it is pretty good. What left the deepest impression on me was the stretch of narrow road where the car steered around a motorcycle. The calibration of this system feels quite aggressive, a bit like a human driver. In similar situations, most of the autonomous cars I have seen before simply slow down and stop; they generally will not steer around to avoid and overtake, but this one does. Have you ever been stuck behind a crawling "tortoise" car or a slow pedestrian? You get anxious, you want to pass, you can't quite manage it, but you keep trying. One thing I noticed in the video, though, is that the car does not seem to turn very smoothly when changing lanes; it ends up a little sideways, as if the transition is slightly unstable. That is just a subjective impression from watching the video.

As for yielding to pedestrians, I do not find it especially remarkable; that is surely one of the most common scenarios engineers calibrate for. The scene I mentioned above is more interesting: the system has to judge the width of the vehicle, the distance to oncoming traffic, and a motorcycle suddenly cutting in beside it, with conditions changing in real time. I keep thinking: the day a car can make a U-turn on a narrow, congested road without the "driver" getting cursed at, overtake freely and calmly in ordinary traffic jams without being cursed at, and dodge potholes on the rough roads of the Qinghai-Tibet highway without being too slow, that will be the day. I used to think true autonomous driving was very far away, but now it seems it may not be that far anymore. Forget it, I am going to found a car company called "Zanghu" and compete against it.


By zhiwo

6 months ago

When I saw this video this afternoon, the first question I asked was: is this a dedicated self-driving test car, or the upcoming mass-production Polar Fox model? If it is a non-production car, there are plenty of precedents for completing the tasks in the video with multiple lidars combined with sensors such as high-definition cameras and radars, but those systems are still a long way from market. For example, WeRide's video of driving into an urban village also contains some very complicated scenes, and they were indeed handled. The second question is whether high-precision map data was used. The highway performance of NIO's NOP and Xpeng's NGP is largely due to the fairly reliable prior information from high-precision maps: the vehicle obtains its location through GPS positioning and then looks up the lane information on the map. This localization is relatively accurate, so the resulting decisions are more reasonable. Urban high-precision maps, however, are still some distance from large-scale deployment, because map vendors must collect them kilometer by kilometer, and urban collection is at a relatively early stage; there has been no large-scale rollout or use. You could also use car makers' crowdsourced maps, but that depends on fleet size and is hard to roll out quickly. The third and most important question is: when will it actually reach mass production? If the answer to the first two questions is "no" and mass production is close, the major manufacturers should be deeply shaken, and Tesla shaken three times over. 2022 will be the year lidar shines. If you cannot catch up within a year, just follow Huawei.
To borrow a line from @sunflowerzzz: Tesla, be careful, there is "Huawei Inside" now.
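The high-precision map prior described in the comment above (a GPS fix, then a lane lookup on the map) can be sketched as a nearest-centerline query. This is a minimal illustration with a toy map; the names (`HD_MAP`, `match_lane`) and the geometry are made up for illustration, not any map vendor's format.

```python
import math

# Toy high-precision map: each lane is a list of centerline points (x, y, meters).
# Lane IDs and coordinates are invented for this sketch.
HD_MAP = {
    "lane_left_turn": [(0.0, 0.0), (5.0, 2.0), (8.0, 6.0)],
    "lane_straight":  [(0.0, 3.5), (10.0, 3.5), (20.0, 3.5)],
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def match_lane(gps_xy, hd_map):
    """Return the lane whose centerline passes closest to the GPS fix.

    A real stack would also use heading and map-relative localization,
    but the core idea is this nearest-geometry query against the prior.
    """
    best_lane, best_d = None, float("inf")
    for lane_id, centerline in hd_map.items():
        d = min(dist(gps_xy, p) for p in centerline)
        if d < best_d:
            best_lane, best_d = lane_id, d
    return best_lane, best_d

lane, offset = match_lane((9.0, 3.2), HD_MAP)
```

Once the lane is known, the planner inherits the map's lane topology (speed limits, turn restrictions) for free, which is exactly why NOP/NGP-style systems behave so much better where such maps exist.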

6 months ago

It is no exaggeration to describe it as an old driver. Just as other car companies were busy stacking hardware, high-power chips, and lidar, the ADS road-test video from Jihu (Polar Fox) and Huawei dropped. Watching the whole video feels comparable to an L4-level robotaxi (cars at that level cost around a million yuan and are generally non-production vehicles). Let me talk about a few difficult scenes in the video. 1. Unprotected left turn at a traffic-light intersection. The difficulties here include accurately recognizing the traffic light state from a long distance; consider how the traffic light detection feature Tesla just released compares. I have not yet seen another company open the traffic light recognition result to the control module so the car can automatically pass through intersections. The second difficulty of the unprotected left turn is that the car must accurately judge the trajectory and distance of the oncoming straight vehicle in order to decide effectively whether to turn. 2. The vehicle's emergency braking. The scene in the screenshot below is a classic: the oncoming white car turns left while the Polar Fox goes straight. By rights the oncoming car should yield, since straight-going traffic has the highest priority. Crossing the intersection at normal speed, the Polar Fox is about to meet this white car when it unexpectedly turns left; fortunately, the Polar Fox braked in time and sounded an alert. This reflects the high recognition accuracy of the perception algorithm and accurate prediction of the other vehicle's trajectory, allowing it to stop in time. 3. Handling of complex scenes. The two screenshots below show the scenes that surprised me most and made me marvel at the Polar Fox's driver-assistance capabilities.
There are oncoming cars, delivery riders who may dart out at any moment, and jaywalkers crossing against the rules. This is a nightmare even for a human driver, let alone an autonomous vehicle. Although the system prompted the driver to take over, the driver cooperatively declined, fully probing the car's performance boundary. To my surprise, the Polar Fox coped with such a complex scene: slowly, yes, but it avoided what needed avoiding and kept on driving, truly showing the skill of an old driver. Finally, remember that this car carries three Huawei lidars and Huawei's complete autonomous driving solution. Huawei is beginning to show its hand. Just two days ago, Huawei's rotating chairman Xu Zhijun said it is better than Tesla.
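The emergency-braking scene in point 2 (a left-turning car cutting across straight-going traffic) is commonly reasoned about via time-to-collision (TTC). A minimal sketch with made-up numbers and a hypothetical threshold; a real system would fuse full trajectory prediction rather than a single closing speed:

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until the paths intersect, assuming constant closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing, no collision predicted
    return gap_m / closing_speed_mps

def should_brake(gap_m, closing_speed_mps, ttc_threshold_s=2.0):
    """Trigger emergency braking when the predicted TTC drops below threshold."""
    return time_to_collision(gap_m, closing_speed_mps) < ttc_threshold_s

# Ego at ~27 km/h (7.5 m/s) as in the video, with a left-turner cutting
# across 12 m ahead: TTC = 12 / 7.5 = 1.6 s, below the 2 s threshold.
brake = should_brake(12.0, 27 / 3.6)
```

The same check with a 30 m gap yields a 4 s TTC and no intervention, which is why an earlier, more accurate trajectory prediction lets the car brake gently instead of sharply.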

6 months ago

Urban L3 may be just one legal requirement away from completion! Two points strike me as advanced: the unprotected left turns in mixed pedestrian and vehicle flow are amazing, and the recognition and prediction of pedestrians, cyclists, and the like are very stable. Perception, prediction, and decision-making are all solid. Control, I think, still lacks refinement: the stops in several scenes look abrupt, a big gap compared with the conservative, gradual control style of mainstream L4 robotaxis. Two things are worth watching. First, can the cost of the whole solution be kept down? Eyeballing it, hardware might run 50,000-70,000 yuan, perhaps with another 20,000-30,000 of software, so a 200,000-250,000 car would probably rise to 300,000-350,000, and that is a fairly optimistic estimate. Second, how dependent is it on high-precision maps? No supplier currently offers urban high-precision maps, and building your own takes a long time and requires surveying and mapping qualifications. Then there is the stability of the whole system: judging from the 7-minute video, urban "L2.999" is no problem, but to reach L3 we still need to see how stably the system handles corner cases and how it performs in perception-demanding scenes (bad weather, low-light environments). Finally, the law must catch up before people can transfer driving responsibility to the car. On the current bottom-up route of mass-produced cars, the technology has begun to approach L3 asymptotically. I personally expect the law to regulate L3 capability before Q4 of next year (2022). The implementation may require a mandatory driving-record black box, plus rules on driver takeover readiness (how quickly the driver must regain control after the system issues a prompt of a specified intensity).
Looking forward to the moment autonomous driving truly reaches L3!
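The takeover rule speculated about above (the driver must regain control within a window after a prompt of a given intensity) could be modeled as a small countdown state machine. Everything below, including the state names and window lengths, is a hypothetical illustration, not any actual regulation or Huawei's logic:

```python
from enum import Enum

class TorState(Enum):
    AUTOMATED = "automated"
    TOR_ISSUED = "tor_issued"        # waiting for the driver to respond
    DRIVER_IN_CONTROL = "driver"
    MINIMAL_RISK = "minimal_risk"    # e.g. pull over, hazards on

# Hypothetical takeover windows per prompt intensity, in seconds.
TAKEOVER_WINDOW_S = {"soft": 10.0, "urgent": 4.0}

def step(state, elapsed_s, intensity, driver_hands_on):
    """Advance the takeover state machine by one observation."""
    if state is TorState.TOR_ISSUED:
        if driver_hands_on:
            return TorState.DRIVER_IN_CONTROL
        if elapsed_s > TAKEOVER_WINDOW_S[intensity]:
            return TorState.MINIMAL_RISK  # driver missed the window
    return state

# An urgent prompt unanswered for 5 s exceeds its 4 s window.
s = step(TorState.TOR_ISSUED, 5.0, "urgent", driver_hands_on=False)
```

The black box mentioned in the comment would then simply log each transition with a timestamp, giving regulators an audit trail of who was responsible when.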

6 months ago

Polar Fox αS: the biggest stakeholder in this car is not BAIC, not Huawei, but Tesla! For Jihu, at most this means one more model with good sales, and subsequent models from other brands equipped with Huawei's autonomous driving will follow; by then Jihu will no longer feel special. For Huawei, even if the autopilot option sells for 30,000 yuan and 10,000 units are sold, that is only 300 million yuan, a figure Huawei currently does not much care about. For Tesla, the situation is completely different. Tesla's financial reports cannot support its current stock price on their own; what props it up is a technology narrative. Capital believes Tesla has unique technical barriers over traditional car companies and can create products that completely outclass theirs. Before Tesla, chassis, powertrain, interior workmanship, sound systems, and so on supported the positioning of high-end cars; the "high tech" they carried was the premium end of traditional components, such as air suspension. Tesla redefined the logic behind high-end cars, shifting it from traditional "accessories" like air suspension to features strongly tied to software and algorithms, with autonomous driving undoubtedly the most eye-catching part. The confidence capital has priced in is that Tesla's barriers in automotive software and algorithms will sustain its product leadership, hence the rosy expectations. The Polar Fox's self-driving performance gives the impression of crushing Tesla, breaking capital's belief in Tesla's absolute technological lead; it even tells capital that Tesla is not merely not leading, but far behind. If so, the logic supporting Tesla's high stock price collapses. Oh, and while this is bad for Tesla, it is good for another stakeholder company: Moutai.

6 months ago

From the perspective of autonomous driving technology, let me share my views on this video and on autonomous driving today. What follows does not discuss business models, technical routes, SAE levels, and so on, and of course involves no trade secrets; some specific technical routes and terms are hidden and replaced with more popular, easy-to-understand vocabulary so the general public can follow. First, a scene analysis of the video from a pure algorithm perspective. The video covers the usual mass-production L2 functions such as ordinary car following, lane keeping, traffic signal recognition, and automatic lane changing, so below I focus on a few difficult scenarios. For convenience, I use "ego" to refer to the self-driving car. All the GIFs below were made from the video at 1.5x speed to keep file sizes down, so they may look a bit... jittery. 1-1, around 1 minute 07 seconds: unprotected left turn, ego as lead car. An unprotected left turn means that in the intersection area, a single traffic light simultaneously controls both left-turning and straight-going vehicles, producing situations like ego's, where a left-turning vehicle faces oncoming straight traffic. The unprotected left turn is one of the most difficult scenarios for autonomous driving on urban roads: it must consider the size of the intersection, ego's dynamics model, the position and speed of oncoming vehicles, whether ego is the lead car, the negotiation space of oncoming traffic, and other factors. It is usually hardest when ego is the lead car, which is easy to understand; anyone who has driven will have felt this.
The ego in the video happens to be the lead vehicle, the harder variant of the unprotected left turn. Here it adopts a yield strategy, giving way to the straight-going vehicles that have right of way. 1-2, around 3 minutes 18 seconds: unprotected left turn, ego following in line, with oncoming traffic close. Another unprotected left turn; this time ego is lucky enough not to be the lead car. The difficulty now is that the oncoming straight car (a Buick GL8) is relatively close and fast, leaving little room for ego's decision-making. In such cases, if perception is off (position, speed, or intention estimated inaccurately), it can easily cause a wrong decision and danger. Unlike the previous yield, ego adopted a preemptive strategy here, on the basis of an accurate judgment of the GL8's speed and position, and caused no interference to the oncoming vehicle. 1-3, around 5 minutes 25 seconds: unprotected left turn plus a left-turning car illegally cutting in (to be honest, such a reckless human driver is relatively rare). Generally, at an unprotected left-turn intersection, straight-going vehicles have priority over left-turning ones. Here ego is the straight-going vehicle, so in theory a law-abiding human driver in the other car would yield, and the intersection itself is small. But this particular human driver was obviously... assertive. Ego judged the oncoming car's illegal left-turn intention in time, braked (27 km/h down to 6 km/h), and promptly issued a TOR (Take Over Request) to remind the driver to pay attention to safety. This is very important in mass production.
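The yield-versus-proceed choice in scenes 1-1 and 1-2 is often framed as gap acceptance: turn only if the oncoming car's arrival time exceeds the time the turn needs plus a margin. A toy sketch of that framing; the numbers and the function name are illustrative, not ADS internals:

```python
def unprotected_left_ok(oncoming_dist_m, oncoming_speed_mps,
                        turn_time_s=4.0, margin_s=1.5):
    """Gap acceptance for an unprotected left turn.

    Proceed only if the oncoming straight car's arrival time exceeds
    the time the turn itself needs plus a safety margin.
    """
    if oncoming_speed_mps <= 0:
        return True  # oncoming car stopped or receding
    arrival_s = oncoming_dist_m / oncoming_speed_mps
    return arrival_s > turn_time_s + margin_s

# A 40 m gap at 12 m/s arrives in ~3.3 s, below the 5.5 s needed: yield.
yield_case = unprotected_left_ok(40.0, 12.0)
# A 90 m gap at 12 m/s gives ~7.5 s: accept the gap and turn first.
go_case = unprotected_left_ok(90.0, 12.0)
```

The lead-car case is harder precisely because the estimate of `turn_time_s` must also cover clearing the intersection from a standstill, shrinking the acceptable gaps.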
With current technology, hardly any autonomous driving company dares to claim its system never needs a human driver to take over in any scenario, so it is crucial to judge the intentions of surrounding traffic participants in time and issue hazard warnings or even a Take Over Request. 1-4, around 5 minutes 44 seconds: overtaking riders approaching from behind, a rider pulling out ahead ("ghost probe"), an unstructured road, and oncoming traffic, all at once. On this unstructured road the lane markings are unclear and the curb is blocked by a long row of parked electric scooters. Ego first keeps a safe distance from the parked scooters on the right; then, while overtaking the riders behind and handling the rider pulling out ahead (note this is not an extreme "ghost probe", but it is quite close), it swerves left to avoid them while simultaneously judging the oncoming traffic's trajectory and adjusting slightly right to avoid it. As for the specific technical implementation (I can't say). 1-5, around 6 minutes 00 seconds: a pedestrian darting across ("ghost probe") plus human-vehicle interaction. The difference from the classic ghost-probe case is that the crossing pedestrian clearly stops after noticing ego, then after a short pause sets off again and walks across ego's path. Anyone who drives regularly will know this scene: you spot a person crossing ahead and step on the brakes; at the same moment, the pedestrian sees you and stops.
Time seems to stand still for a second: you see he hasn't moved, he sees you haven't moved, so you step on the accelerator just as he strides forward, so you brake again and he stops, and the wonderful Russian-doll loop begins... This involves pedestrian intention prediction and judging the interaction between ego and the pedestrian. As for the specific technical implementation (I can't say)... 1-6, around 7 minutes 04 seconds: straddling the lane line to pass slow cyclists while avoiding oncoming traffic. In another answer I compared driving conditions for autonomous vehicles in China and the United States; one of the biggest contributors to the complexity of Chinese roads is the sheer variety of cyclists, as anyone who drives here will know. Honestly, this case is not extreme, but cyclists change speed and direction quickly, which makes their trajectories harder to predict. In this scene, you can see the two cyclists ahead take the lane first (around 6 minutes 57 seconds); around 7 minutes 05 seconds, they seem to notice ego following them and move toward the roadside. Ego observes the position, speed, and heading of the oncoming vehicle and of the cyclists, and decisively chooses to overtake (it even quietly raised its speed from 17 km/h to 27 km/h). At this moment, the media sister in the car involuntarily said "tql" ("so impressive"), the strongest recognition of the team's work these past few years. Honestly, even praise from my own boss would not make me this happy! If anyone has that media lady's contact information, please reach out... Finally, you can also count how many of the intersections in the video involve unprotected left turns.
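One common way to reason about the "Russian doll" standoff in scene 1-5 (ego and the pedestrian endlessly mirroring each other's starts and stops) is to commit to each decision for a minimum hold time instead of re-deciding every frame. The sketch below illustrates only that hysteresis idea; the logic and hold time are assumptions, not how ADS actually resolves it:

```python
def decide(ped_moving, prev_decision, hold_frames_left, hold_frames=10):
    """Yield/go decision with hysteresis to avoid a start-stop deadlock.

    Re-deciding every frame mirrors the pedestrian's reactions and feeds
    the standoff; committing to each decision for `hold_frames` frames
    breaks the loop. Returns (decision, remaining_hold_frames).
    """
    if hold_frames_left > 0:
        return prev_decision, hold_frames_left - 1  # committed: do not flip
    decision = "yield" if ped_moving else "go"
    return decision, hold_frames

# Simulate: pedestrian stops (frame 0), then twitches forward (frames 1-2).
decision, hold = decide(ped_moving=False, prev_decision="yield",
                        hold_frames_left=0)
history = [decision]
for ped_moving in (True, True):
    decision, hold = decide(ped_moving, decision, hold)
    history.append(decision)
```

Without the hold, the decision would flip back to "yield" the instant the pedestrian twitched, reproducing exactly the mutual stop-start loop described above.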
Now let me share my understanding of where autonomous driving development stands. These are my own simple views and represent no organization or individual. I agree with a senior colleague's opinion: autonomous driving technology is not internet technology; to a large extent it belongs to a manufacturing industry that combines software and hardware. To do it well, a company needs not only core technical competitiveness but also a strong grip on quality across the supply chain. Autonomous driving has now passed the first two stages of becoming a product (note: "product" here is not in the internet sense). The first wave dug the pits, 2014-2016: autonomous driving was still a blue ocean, large numbers of people followed in Waymo's footsteps to build robotaxis, and some spotted the robotruck track. If you were in the field at all, you did L4; back then, a company claiming to do L3 was embarrassed to show its face. The second wave filled the pits with prototype technology, 2016-2019: fancy backbones, dazzling tricks, steadily climbing CVPR/AAAI submission and acceptance counts. This directly raised the hiring bar at many autonomous driving companies and sped up the growth of domestic master's and PhD talent. Many companies gradually realized how hard L4 operation is, shifted their mindset toward being suppliers to OEMs, and cooperated with OEMs to obtain precious cash flow or data. Since 2020, autonomous driving technology has entered the third stage, making real products: comprehensive, systematic, endless polishing.
Software algorithms, software engineering, software architecture, CI capability, lab validation capability, hardware reliability, the trade-off between system compute and system software capability, process traceability, upstream and downstream links in the industry chain, quality control... there is still far too much to do, and the industry has long since moved past the first two stages into systematic competition. "To test jade, burn it three full days; to judge timber, wait seven years." The road is still very long. Finally, a Q&A about this car (this really is not a promotional post; it is a serious discussion of autonomous driving technology). Q: What car is this? Can I buy it? A: Not a PPT; a production car, the BAIC Jihu. Q: When can I buy it? A: The launch event is on the 17th; probably one to two quarters into next year. Q: What sensor configuration does it use? A: I'm not sure I can say, but it is readily available online. Q: Is there a roof-mounted lidar? A: No; it is a production configuration, and I can hardly imagine a production car with a flowerpot on its roof (apart from NIO's "little dragon man" style, which I personally find not ugly, at least). Q: Is Huawei building cars? A: Whoever's boss says he will build a car should resign! Q: Which is better, you or Tesla? A: I don't intend to compare us with our friends, but anyone who knows a little about autonomous driving knows the answer; every company has something worthy of respect, especially Tesla. Q: How much is this car? A: I'm not sure of the details; watch tomorrow's launch. Probably between 200,000 and 300,000? Q: Why is this car so expensive? A: ...we now invite the next male guest to come up and ask his question.

6 months ago

The video shows that Huawei's ADS driver-assistance system fuses perception from multiple sensors and can already change lanes on its own and make unprotected left turns. Two things impressed me most. First, Huawei ADS already has strong traffic light recognition, achieving accurate perception and correct decisions in some complex scenes. At every traffic-light intersection in the video, the Polar Fox Alpha S identified the lights correctly. For example, around 2 minutes 35 seconds, it recognizes a red light and slows down while still far from the intersection; note the central-screen UI, where the lights are shown as identified. At 3 minutes 28 seconds, the system identified a red left-turn arrow and a green straight-ahead light from well before the intersection. From 5 minutes 42 seconds to 6 minutes 16 seconds, the Polar Fox Alpha S negotiated a very complicated stretch of road: it correctly recognized pedestrians, electric scooters, and oncoming vehicles, and actively made evasive maneuvers. When it had to avoid an electric scooter while facing an oncoming car, it decisively slowed down, and the whole decision process was quite smooth. Roads like this generally give human drivers a headache. Second, Huawei ADS is relatively conservative in strategy and actively reminds the driver to take over, even though some of its braking is slightly rough. Around 1 minute 4 seconds, when the vehicle ahead had already turned left and the oncoming straight traffic had not yet started, the Polar Fox Alpha S was preparing to turn left.
But on seeing the oncoming car start moving straight ahead, the Polar Fox Alpha S quickly aborted the left turn and let it pass, even though the braking was slightly rough. By the traffic rules, straight-ahead traffic does have priority, but honestly, most human drivers would have gone first. Similar conservatism shows at 5 minutes 25 seconds: driving straight through an intersection, the car chose to brake when it judged that a left-turning vehicle was not going to yield. Human drivers would make a similar choice here, but I feel Huawei ADS brakes more fully than an ordinary driver would, and it also pops up a warning asking the driver to take over. Many details show that Huawei ADS's strategy is relatively conservative, and I personally think a conservative strategy at this stage is the prudent, pragmatic attitude. A solemn reminder: what we have today is still L2 in nature, requiring the driver to stay focused at all times and be ready to take over at any moment. Of course, the video also raises some questions: 1. The tablet on the Polar Fox's center console was presumably installed for the demonstration; is all the hardware in the video, including the lidars, consistent with the production car? 2. Does the demo rely on high-precision maps? 3. At present, only the high-end HI version of the Polar Fox Alpha S carries lidar; what is its exact price, how far can the cost of Huawei's self-developed lidar fall, and when will the lidar version be delivered at scale?
If, at the upcoming launch, the Polar Fox Alpha S can confirm all of the following at once: that the sensors in the demo match the mass-production car, that the demo does not rely on high-precision maps, that the high-spec HI version is priced within 350,000 yuan, and that it will be delivered in the second quarter, then it really is a blockbuster. It would mean Huawei has used mass-production hardware, backed by lidar's powerful sensing, to achieve an effect close to that of a professional autonomous driving test fleet. Achieving only some of these would still be a good showing; achieving just one would mean relatively little. Finally, a summary. As I said in a previous answer, the industry has long had two schools of thought on how to realize true autonomous driving. The new car makers, represented by Tesla, fit large fleets of mass-produced cars with sensors to collect data, aiming to evolve gradually upward from L2. The internet companies, represented by Waymo and Baidu, collect data through self-built fleets, trying to step straight into L4. Their technical paths also differ: the former judged early lidar too expensive for mass production and so leaned on visual recognition (though companies like NIO and Xpeng have recently begun fitting lidar); the latter, with self-built fleets relatively insensitive to cost, adopted lidar and multi-sensor fusion from the start. In terms of results, both routes have their gains: the bottom-up car makers have huge numbers of cars on the road, which not only collects massive data but also reduces driving fatigue in some scenarios (e.g., on highways).
This improves the consumer experience. However, the bottom-up camp currently has no mass-produced lidar-equipped cars on the road and is limited by cost-constrained cameras, millimeter-wave radars, and other sensors, so its intervention-free mileage in urban conditions is still very limited. The top-down internet companies generally run large road-test fleets, and their intervention-free autonomous mileage is already impressive: Baidu, for example, averaged one intervention per 30,000 kilometers in 2019, and the figures for Baidu and Waymo today will only be stronger. But the top-down teams generally carry extremely lavish sensor suites; a single car's lidar rig alone commonly costs tens or even hundreds of thousands of dollars. Such costs are obviously unacceptable for mass-produced cars, leaving their current performance somewhat "high-scoring but unaffordable". Personally, I think that with current technology, machine-learning-based AI algorithms need enormous amounts of data for continuous training, and given the tail risks autonomous driving faces, the massive data from mass-produced cars is indispensable. So at the commercial level I prefer the bottom-up, gradual-evolution strategy. But precisely because of those tail risks, the safety redundancy of vision-only solutions may not be enough, so in technical means I am more optimistic about multi-sensor fusion. From this perspective, the "Chrysanthemum Factory" (Huawei) choosing to develop its own lidar and apply it to mass-produced vehicles as its entry into autonomous driving seems well matched to the industry's direction, in both timing and means.
Judging from the Polar Fox's self-driving road-test video, lidar's improvement to autonomous driving is immediate. I am therefore looking forward to how NIO's and Xpeng's systems perform once they introduce lidar.

6 months ago

As an autonomous driving practitioner, I am not surprised by this kind of video. A year or two ago, many autonomous driving companies had already reached this level of performance; you can find similar demo videos from Zoox, Pony.ai, WeRide, Baidu, and others. Autonomous driving talent is now highly mobile, domestic talent has gradually come up through training, and there are not many technical barriers left here. Those shouting "I kneel in awe" and "Huawei rises" most likely do not know much about the current state of the industry's technology. What did surprise me was when people from Huawei's Car BU told me they wanted to hire 3,000 people for autonomous driving, enough to scoop up a third of the domestic talent pool, for full-stack in-house development of both software and hardware. I knew they were not joking, and my jaw nearly hit the floor. That is terrifying. The scary thing about Huawei is its organizational capability: after so many years of R&D, it has formed a sound system that can mobilize and deliver relentlessly. With money on the table, people can be hired away, and a good system makes people willing to grind. And it happens that Huawei is both very rich and very systematic. So I believe Huawei can do autonomous driving well, and our company also regards Huawei as its biggest future competitor.

6 months ago

Impressive. I always thought Xu Zhijun was bragging... I did not expect a real big move. With lidar on board, the car's perception takes a qualitative leap, but I did not expect traffic light recognition mature enough for urban driving; it seems Huawei's vision stack is also very mature. From the video, at one intersection where the car yields to a left-turning vehicle, it brakes somewhat sharply. It feels as if the lidar's effective range is a bit short: if vision has not recognized the target and the laser has not yet scanned it, the result is a sudden brake at close range. A human driver expecting a left-turner to cut across would mostly slow down earlier. If this solution can be installed at scale at a low price, it will have a huge impact on the auto industry. Compared with Tesla, the impact on Xpeng and NIO will be greater, Xpeng especially. Tesla's large base sales can dilute its autonomous driving R&D costs; Xpeng is the most intelligence-focused of the new car makers, its NGP is about the strongest pilot assistance in the country right now, and that is Xpeng's comparative advantage over other car companies. Huawei's entry gives traditional car companies a reliable partner; if Huawei's solution can deliver, it will form something like a Wintel alliance for the auto industry. Suddenly, the top-spec Zeekr 001 at 360,000 doesn't seem so attractive anymore...

6 months ago

I used to think our company's autonomous driving effort was a joke. Others have worked on it for so many years, and we had no car-related business at all, so why bother?
After the sanctions, I even wondered where our company's future profits would come from and whether we could hold on. After watching this video, I feel I still know too little about our own company. Nothing to waver about; back to moving bricks!

6 months ago

Too strong! Just yesterday Xpeng announced the hardware for XPILOT 3.5, with the P5 becoming the first production model equipped with lidar; it looked like Xpeng was once again at the forefront of mass-produced autonomous driving, and everyone was looking forward to the city NGP rollout a year out. Today, one video from Huawei immediately pulled the spotlight in mass-produced autonomous driving back to itself. In the second half of the video, the handling of the stretch of road threaded with electric scooters is amazing; that is about the hardest real-world scene you can meet in daily life. If this autonomous driving solution can really reach mass production this year, Huawei will undoubtedly stand at the top of the era. Huawei is Huawei, after all!
