Self-Driving Car Studio
A multi-disciplinary turnkey laboratory that can accelerate research, diversify teaching, and engage students from recruitment to graduation.
The Quanser Self-Driving Car Studio is an ideal platform for investigating a wide range of self-driving topics in teaching and academic research, in an accessible and relevant way. Use it to jump-start your research or give students authentic, hands-on experience with the essentials of self-driving. The studio brings you the tools and components you need to test and validate dataset generation, mapping, navigation, machine learning, and other advanced self-driving concepts, at home or on campus.
Product Details
At the center of the Self-Driving Car Studio is the QCar, an open-architecture scaled model vehicle powered by an NVIDIA® Jetson™ TX2 supercomputer and equipped with a wide range of sensors, cameras, encoders, and user-expandable IO.
With a set of software tools that includes Simulink®, Python™, TensorFlow, and ROS, the studio enables researchers to build high-level applications and reconfigure low-level processes, supported by pre-built modules and libraries. Using these building blocks, you can explore topics such as machine learning and artificial intelligence training, augmented/mixed reality, smart transportation, multi-vehicle scenarios and traffic management, cooperative autonomy, navigation, mapping and control, and more.
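As a taste of what those building blocks enable, the sketch below uses OpenCV (one of the supported libraries listed further down) to extract lane-marking candidates from a single camera frame, the kind of low-level perception step you might pair with the QCar's CSI cameras. The file name and tuning values are illustrative assumptions, not part of the studio's shipped examples.

```python
# Illustrative lane-marking detection with OpenCV (a supported library).
# The image path and tuning values below are assumptions for this sketch,
# not part of Quanser's shipped QCar examples.
import cv2
import numpy as np

frame = cv2.imread("qcar_frame.png")          # e.g., a saved CSI camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)           # edge map of the roadway

# Keep only the lower half of the image, where the road surface appears.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
roi = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edges; these are lane-marking candidates.
lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("lanes.png", frame)
```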
| Specification | Details |
| --- | --- |
| Dimensions | 39 x 21 x 21 cm |
| Weight (with batteries) | 2.7 kg |
| Power | 3S 11.1 V LiPo (3300 mAh) with XT60 connector |
| Operation time (approximate) | 2 h 11 min (stationary, with sensor feedback) |
| | 30 min (driving, with sensor feedback) |
| Onboard computer | NVIDIA® Jetson™ TX2 |
| | CPU: 1.2 GHz quad-core ARM Cortex-A57 64-bit + 1.2 GHz dual-core NVIDIA Denver2 64-bit |
| | GPU: 256-core NVIDIA Pascal™ GPU architecture, 1.3 TFLOPS (FP16) |
| | Memory: 8 GB 128-bit LPDDR4 @ 1866 MHz, 59.7 GB/s |
| LIDAR | LIDAR with 2k–8k resolution, 10–15 Hz scan rate, 12 m range |
| Cameras | Intel D435 RGBD camera |
| | 360° 2D CSI cameras using 4x 160° FOV wide-angle lenses, 21–120 fps |
| Encoders | 720-count motor encoder (pre-gearing) with hardware digital tachometer |
| IMU | 9-axis IMU (gyroscope, accelerometer, magnetometer) |
| Safety features | Hardware “safe” shutdown button |
| | Auto power-off to protect batteries |
| Expandable IO | 2x SPI |
| | 4x I2C |
| | 40x GPIO (digital) |
| | 4x USB 3.0 ports |
| | 1x USB 2.0 OTG port |
| | 3x serial |
| | 4x additional encoders with hardware digital tachometer |
| | 4x unipolar analog inputs, 12-bit, 3.3 V |
| | 2x CAN bus |
| | 8x PWM (shared with GPIO) |
| Connectivity | WiFi 802.11a/b/g/n/ac, 867 Mbps, with dual antennas |
| | 2x HDMI ports for dual-monitor support |
| | 1x 10/100/1000BASE-T Ethernet |
| Additional QCar features | Headlights, brake lights, turn signals, and reverse lights (with intensity control) |
| | Dual microphones |
| | Speaker |
| | LCD diagnostic monitoring with battery voltage and custom text support |
Vehicles
- QCar (single vehicle or vehicle fleet)
Ground Control Station
- High-performance computer with RTX graphics card with Tensor AI cores
- Three monitors
- High-performance router
- Wireless gamepad
- QUARC Autonomous license
Studio Space
- Set of reconfigurable floor panels with roadway patterns
- Set of traffic signs
Supported Software and APIs
- QUARC Autonomous Software License
- Quanser APIs
- TensorFlow
- TensorRT
- Python™ 2.7 & 3
- ROS 1 & 2
- CUDA®
- cuDNN
- OpenCV
- DeepStream SDK
- VisionWorks®
- VPI™
- GStreamer
- Jetson Multimedia APIs
- Docker containers with GPU support
- Simulink® with Simulink Coder
- Simulation and virtual training environments (Gazebo, QuanserSim)
- Multi-language development supported with Quanser Stream APIs for inter-process communication (see the sketch after this list)
- Unreal Engine
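On that last point: the Quanser Stream APIs give processes written in different languages a common byte-stream channel, so a Python perception node can feed a Simulink controller, for example. The sketch below shows the same client/server pattern using Python's standard socket module as a stand-in, since the exact Stream API calls are documented with QUARC; the port number and payload are assumptions for this local demo.

```python
# Client/server byte-stream exchange, the pattern the Quanser Stream APIs
# provide across languages. Python's standard socket module stands in here;
# the port number and payload are assumptions for this sketch.
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 18000  # assumed values for a local demo

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            # Receive one packed double, e.g., a commanded speed in m/s.
            (speed,) = struct.unpack("!d", conn.recv(8))
            print(f"server received commanded speed: {speed:.2f} m/s")

t = threading.Thread(target=server, daemon=True)
t.start()

# Client side: another process (possibly another language) sends a command.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(struct.pack("!d", 0.75))  # 0.75 m/s, an illustrative value
t.join()
```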