A pre-emptive test to catch errors in autonomous vehicle perception systems


"The same way cars have to go through crash tests to ensure safety, this method offers a pre-emptive test to catch errors in autonomous systems," he explained. Findings of the study were discussed at the Design, Automation and Test meeting.

Typically, autonomous vehicles "learn" about the world via machine learning systems, which are fed huge datasets of road images before they can identify objects on their own. But the system can go wrong. In the case of a fatal accident between a self-driving car and a pedestrian in Arizona last March, the software classified the pedestrian as a "false positive" and decided it didn't need to stop.

"We thought, clearly there is some issue with the way this perception algorithm has been trained. When a human being perceives a video, there are certain assumptions about persistence that we implicitly use: if we see a car within a video frame, we expect to see a car at a nearby location in the next video frame. This is one of several sanity conditions that we want the perception algorithm to satisfy before deployment," explained Jyo Deshmukh, co-author of the study.
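
The persistence assumption Deshmukh describes can be phrased as a simple check over a detector's per-frame output. The following is a minimal sketch, not the researchers' Timed Quality Temporal Logic implementation: it assumes detections arrive as dictionaries with a class label and a bounding box, and it flags any detection that has no same-class detection nearby (by intersection-over-union) in the following frame. The detection format and the 0.3 overlap threshold are illustrative assumptions.

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def persistence_violations(frames, min_iou=0.3):
    # frames: list of per-frame detection lists, each detection a dict like
    # {"label": "car", "box": (x1, y1, x2, y2)}.
    # Flags detections with no nearby same-class match in the next frame,
    # i.e. objects that "disappear" between consecutive frames.
    # (Objects genuinely leaving the field of view would need extra handling.)
    violations = []
    for t in range(len(frames) - 1):
        for det in frames[t]:
            matched = any(det["label"] == nxt["label"] and
                          iou(det["box"], nxt["box"]) >= min_iou
                          for nxt in frames[t + 1])
            if not matched:
                violations.append((t, det))
    return violations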

For example, an object cannot appear and disappear from one frame to the next. If it does, it violates a "sanity condition," or basic law of physics, which suggests there is a bug in the perception system.

The team of researchers formulated a new mathematical logic, called Timed Quality Temporal Logic, and used it to test two popular machine learning tools--SqueezeDet and YOLO--using raw video datasets of driving scenes. The logic successfully homed in on instances of the machine learning tools violating "sanity conditions" across multiple frames in the video. Most commonly, the machine learning systems failed to detect an object or misclassified an object.
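
The misclassification failures can be framed as another frame-to-frame consistency check. The sketch below is again only an illustration rather than the published TQTL monitor: it matches each detection to the closest detection in the next frame by box centre and flags cases where the class label flips between frames; the 50-pixel matching radius and the detection format are assumptions.

def center(box):
    # Centre point of a box given as (x1, y1, x2, y2).
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def label_flip_violations(frames, max_dist=50.0):
    # Flags detections whose nearest detection in the next frame (within
    # max_dist pixels, measured between box centres) has a different label.
    flips = []
    for t in range(len(frames) - 1):
        for det in frames[t]:
            cx, cy = center(det["box"])
            best, best_d = None, max_dist
            for nxt in frames[t + 1]:
                nx, ny = center(nxt["box"])
                d = ((cx - nx) ** 2 + (cy - ny) ** 2) ** 0.5
                if d <= best_d:
                    best, best_d = nxt, d
            if best is not None and best["label"] != det["label"]:
                flips.append((t, det["label"], best["label"]))
    return flips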

For instance, in one example, the system failed to recognize a cyclist from the back, when the bike's tire looked like a thin vertical line. Instead, it misclassified the cyclist as a pedestrian. In this case, the system might fail to correctly anticipate the cyclist's next move, which could lead to an accident.

Phantom objects--where the system perceives an object when there is none--were also common. This could cause the car to mistakenly slam on the brakes--another potentially dangerous move.

The team's method could be used to identify anomalies or bugs in the perception algorithm before deployment on the road, and it allows the developer to pinpoint specific problems. The idea is to catch issues with the perception algorithm in virtual testing, making the algorithms safer and more reliable. Crucially, because the method relies on a library of "sanity conditions," there is no need for humans to label objects in the test dataset--a time-consuming and often-flawed process.
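
Phantom objects lend themselves to the same style of check. The sketch below (assuming the iou() helper and the detection format from the first sketch are in scope) flags any detection that has no same-class overlap in either the previous or the next frame, i.e. an object that exists for a single frame only. As the article notes, only the detector's own output is inspected, so no human-labelled ground truth is required.

def phantom_violations(frames, min_iou=0.3):
    # Flags detections that appear for one frame only: no same-class box with
    # sufficient overlap in the previous frame or in the next frame.
    # Relies on iou() from the persistence sketch above.
    def matched(det, others):
        return any(det["label"] == o["label"] and
                   iou(det["box"], o["box"]) >= min_iou for o in others)

    phantoms = []
    for t in range(1, len(frames) - 1):
        for det in frames[t]:
            if not matched(det, frames[t - 1]) and not matched(det, frames[t + 1]):
                phantoms.append((t, det))
    return phantoms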
