Detect Object Collisions

You can use collision detection to accurately model the physical constraints of objects
in the real world, so that two objects do not occupy the same place at the same time.
You can use collision detection node outputs to:

Change the state of other virtual world nodes.

Apply MATLAB® algorithms to collision data.

Drive Simulink® models.

For example, you can use geometric sensors for robotics modeling. For examples of using
collision detection, see vrcollisions and vrmaze.

Set Up Collision Detection

To set up collision detection, define collision (pick) sensors that detect when they
collide with targeted surrounding scene objects. The virtual world sensors resemble
real-world sensors, such as ultrasonic, lidar, and touch sensors. The Simulink
3D Animation™ sensors are based on X3D sensors (also supported for VRML), as described in
the X3D picking component specification. For descriptions of pick sensor output
properties that you can access with VR Source and VR Sink
blocks, see Use Collision Detection Data in Models.

Pick sensor types include:

PointPickSensor — Point clouds that detect which of the
points are inside colliding geometries

LinePickSensor — Ray fans or other sets of lines that
detect the distance to the colliding geometries

Here is an example of the key nodes for defining a collision detection sensor for the
robot in the vrcollisions virtual world:
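In the shipped example, these nodes appear as a tree in the virtual world file. The X3D sketch below is an illustrative reconstruction of that structure, not the exact contents of the vrcollisions world; the coordinates, grouping, and field values are assumptions.

```xml
<Transform DEF='Robot_Body'>
  <Shape>
    <!-- Picking geometry: a small fan of line segments around the robot
         (coordinates are illustrative) -->
    <IndexedLineSet DEF='Line_Set' coordIndex='0 1 -1 0 2 -1'>
      <Coordinate point='0 0 0 1 0 0 0 0 1'/>
    </IndexedLineSet>
  </Shape>
</Transform>

<!-- The sensor picks with Line_Set against the Walls_Obstacles group,
     which is DEFed elsewhere in the world -->
<LinePickSensor DEF='Collision_Sensor' intersectionType='GEOMETRY'>
  <IndexedLineSet USE='Line_Set' containerField='pickingGeometry'/>
  <Group USE='Walls_Obstacles' containerField='pickTarget'/>
</LinePickSensor>
```

The containerField attributes route the reused nodes into the sensor's pickingGeometry and pickTarget fields, matching the relationships described above.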

The Robot_Body node has the Line_Set node as
one of its children. The Line_Set node defines the picking geometry
for the sensor.

The Collision_Sensor node defines the collision detection sensor for
the robot. The sensor's pickingGeometry field specifies the
Line_Set node as the picking geometry, and its
pickTarget field specifies the
Walls_Obstacles node as the target for collision detection.

Sensor Collisions with Multiple Object Pick Targets

To control how a pick sensor behaves when it collides with a pick target geometry that
consists of multiple objects, use the intersectionType property.
Possible values are:

GEOMETRY – The sensor collides with the union of the individual
bounding boxes of all objects defined in the pickTarget field. In
general, this setting produces more exact results.

BOUNDS – (Default) The sensor collides with one large bounding
box constructed around all objects defined in the pickTarget
field.

In the vrcollisions example, the LinePickSensor
has the intersectionType field set to GEOMETRY. With this
setting, a sensor inside the colliding geometry (the room walls) does not collide
with a single merged boundary around the walls; a collision takes place only when
sensor rays touch one of the walls. If the intersectionType is set to
BOUNDS, collision detection works only for a sensor that approaches
the room from the outside, because the whole room is wrapped into one large bounding
box that interacts with the sensor.
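In the X3D file, this choice is a single field on the sensor node. A minimal sketch of the two settings (the node name is assumed; only one would appear in an actual world):

```xml
<!-- Per-object collision: union of each target's own bounding box -->
<LinePickSensor DEF='Collision_Sensor' intersectionType='GEOMETRY'/>

<!-- Default: one bounding box wrapped around all pick targets -->
<!-- <LinePickSensor DEF='Collision_Sensor' intersectionType='BOUNDS'/> -->
```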

Make Picking Geometry Transparent

You can make the picking geometry used for a pick sensor invisible in the virtual
world. For the picking geometry, in its Material node, set the
Transparency property to 1. For example, in the
vrcollisions virtual world, for the
Collision_Sensor picking geometry node (Line_Set),
in its Material node, change the Transparency
property to 1.
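You can make this change interactively in the 3D World Editor, or from MATLAB through the vrworld and vrnode interfaces. In this sketch, the world file name and the DEF name of the Material node are assumptions:

```matlab
% Open the virtual world and get the Material node of the picking geometry.
% File name and node DEF name are assumptions for illustration.
w = vrworld('vrcollisions.wrl');
open(w);
mat = vrnode(w, 'Line_Set_Material');   % Material node of the Line_Set shape (assumed)
mat.transparency = 1;                   % fully transparent picking geometry
```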

Avoid Impending Collisions

To avoid an impending collision (before the collision actually occurs), you can use
the pickedRange output property for a
LinePickSensor. As part of the line set picking geometry, define one or
more long lines that reflect your desired amount of advance notice of an impending
collision. You can make those lines transparent. Then create logic based on the
pickedRange value.
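For example, inside a MATLAB Function block you could threshold the sensor output. The signal name and the warning distance below are assumptions, not part of the shipped example:

```matlab
function slowDown = checkRange(pickedRange)
% pickedRange: distances reported by a LinePickSensor along its lines.
% Flag an impending collision when any detected range falls below a
% warning threshold (value chosen for illustration).
warningDistance = 2;   % world units (assumption)
slowDown = any(pickedRange > 0 & pickedRange < warningDistance);
end
```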

Use Collision Detection Data in Models

The isActive output property of a sensor becomes
TRUE when a collision occurs. To associate a model with the virtual
reality scene, you can use a VR Source block to read the sensor
isActive property and the current position of the object for which the
sensor is defined. You can use a VR Sink block to define the behavior of the
virtual world object, such as its position, rotation, or color.
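Outside a Simulink model, you can also inspect a sensor's output fields directly from MATLAB through the vrnode interface; the world file name and sensor node name here are assumptions:

```matlab
% Read the sensor's isActive output from MATLAB
% (file and node names are assumptions).
w = vrworld('vrcollisions.wrl');
open(w);
sensor = vrnode(w, 'Collision_Sensor');
if sensor.isActive
    disp('Collision detected')
end
```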

For example, the VR Source block in the top left of the vrcollisions
Simulink model gets data from the associated virtual world.

In the model, select the VR Source block, and then in the Simulink
3D Animation Viewer, select Simulation > Block parameters. This image shows some of the key selected properties.

For the LinePickSensor, PointPickSensor, and PrimitivePickSensor, you can
select these output properties for a VR Source block:

enabled – Enables node operation.

Note

The enabled property is the only property that you can select with a VR
Sink block.

isActive – Indicates when the intersecting object is picked by
the picking geometry.

pickedPoint – Displays the points on the surface of the
underlying picking geometry that are picked (in the local coordinate
system).
