Observations are generated for each controlled car in the scene. The observation type can be specified in the config file as ‘raw’, ‘Q-LIDAR’, or ‘bitmap’. The environment’s step function returns a list containing one observation per controlled car in the scene.
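As a minimal sketch of how the observation type might be selected, the dictionary below mirrors the config-file choice described above. The key names (`"environment"`, `"observation_type"`, `"controlled_cars"`) are assumptions for illustration, not the library's documented schema:

```python
# Illustrative sketch only: the key names below are assumptions,
# not the actual FLUIDS config schema.
config = {
    "environment": {
        "observation_type": "Q-LIDAR",  # one of "raw", "Q-LIDAR", "bitmap"
    },
    "agents": {
        "controlled_cars": 2,  # step() returns one observation per controlled car
    },
}

def validate_observation_type(cfg):
    """Check that the requested observation type is one of the three supported values."""
    allowed = {"raw", "Q-LIDAR", "bitmap"}
    obs_type = cfg["environment"]["observation_type"]
    if obs_type not in allowed:
        raise ValueError("unsupported observation type: %r" % obs_type)
    return obs_type
```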


A copy of the raw environment, giving agents full access to the scene and all other objects in it. Avoid this observation type where possible, as duplicating the environment incurs a significant performance penalty.
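The performance penalty comes from deep-copying every object in the scene on each step. The toy class below is a stand-in for a real environment state (the actual state class and its contents are not shown here); it illustrates what a ‘raw’ observation effectively does:

```python
import copy
import time

# Toy stand-in for a large environment state; the real state object
# holds every car, pedestrian, and static obstacle in the scene.
class DummyState:
    def __init__(self, n_objects=1000):
        self.objects = [{"pos": [i, i], "vel": [0.0, 0.0]} for i in range(n_objects)]

state = DummyState()
start = time.perf_counter()
clone = copy.deepcopy(state)  # roughly what a 'raw' observation requires each step
elapsed = time.perf_counter() - start
```

The copy is fully independent of the original, which is why the cost scales with the number and size of objects in the scene.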


A representation based on features an autonomous vehicle might extract from LIDAR sensors: the relative distances to collideable objects in the scene. This is a numpy array of distances produced by a Featurizer. The density and range of the Q-LIDAR beams can be configured in the featurizer.

class gym_urbandriving.utils.featurizer.Featurizer(config_data={}, beam_distance=300, n_arcs=9)[source]

Object to convert a state observation into a Q-LIDAR observation.


int – How far each “LIDAR” beam will project into the scene


int – How many “LIDAR” beams to project around the car

featurize(current_state, controlled_key, type_of_agent='controlled_cars')[source]

Returns a Numpy array containing the Q-LIDAR representation of the state

  • current_state (PositionState) – State of the world
  • controlled_key – Key for controlled car in the state to generate a feature for

Return type:

Numpy array. For each ray projected into the scene, the array contains the distance to the collision, the angle to the collision, and the velocity of the intersected object
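To make the feature layout concrete, here is an illustrative re-implementation of the Q-LIDAR idea, not the library's actual `featurize` code. Obstacles are modeled as circles `(position, radius, speed)`, and the beams are assumed to be spread evenly across a frontal arc; both modeling choices are assumptions for this sketch:

```python
import numpy as np

def qlidar_features(car_pos, car_angle, obstacles, beam_distance=300, n_arcs=9):
    """Sketch of a Q-LIDAR featurizer (not the library's actual code).

    Casts n_arcs rays from the car and, per ray, records the distance to
    the nearest hit, the beam angle relative to the car, and the speed of
    the intersected object (beam_distance / 0 / 0 when nothing is hit).
    """
    features = []
    # Assumption: beams spread evenly across a 180-degree frontal arc.
    angles = car_angle + np.linspace(-np.pi / 2, np.pi / 2, n_arcs)
    for theta in angles:
        direction = np.array([np.cos(theta), np.sin(theta)])
        hit_dist, hit_angle, hit_speed = float(beam_distance), 0.0, 0.0
        for obs_pos, obs_radius, obs_speed in obstacles:
            to_obs = np.asarray(obs_pos, dtype=float) - car_pos
            along = float(np.dot(to_obs, direction))  # projection onto the ray
            if along <= 0 or along >= hit_dist:
                continue  # behind the car, or farther than the current hit
            # Perpendicular distance from the ray to the obstacle centre.
            perp = float(np.linalg.norm(to_obs - along * direction))
            if perp <= obs_radius:
                hit_dist, hit_angle, hit_speed = along, theta - car_angle, obs_speed
        features.extend([hit_dist, hit_angle, hit_speed])
    return np.array(features)
```

With the default `n_arcs=9`, the output is a flat array of 27 values: one (distance, angle, speed) triple per beam, in beam order.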


Returns a Numpy image array, as generated by the visualizer, for vision-based control agents. The image is a top-down view of the intersection.
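A vision-based agent would typically preprocess such a frame before feeding it to a model. The sketch below assumes an 800x800 RGB frame purely for illustration; the actual resolution is whatever the visualizer produces:

```python
import numpy as np

def preprocess_bitmap(image, out_size=100):
    """Convert an RGB top-down frame to a small grayscale float array.

    Illustrative preprocessing only; the input shape is an assumption,
    not a documented property of the bitmap observation.
    """
    gray = image.astype(np.float32).mean(axis=2) / 255.0  # collapse RGB channels
    stride = gray.shape[0] // out_size
    return gray[::stride, ::stride][:out_size, :out_size]  # crude downsample

frame = np.zeros((800, 800, 3), dtype=np.uint8)  # dummy stand-in frame
small = preprocess_bitmap(frame)
```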