Basler Industrial Camera (acA2500-20gm) Parameters

Published 2019-08-10 22:42:08 · Source: IT Technology · Author: seo实验室 editor

Acquisition Frame Rate

The Acquisition Frame Rate camera feature allows you to set an upper limit for the camera's frame rate.

This is useful if you want to operate the camera at a constant frame rate in free run image acquisition.

How It Works

If the Acquisition Frame Rate feature is enabled, the camera's maximum frame rate is limited by the value you enter for the acquisition frame rate parameter.

For example, setting an acquisition frame rate of 20 frames per second (fps) has the following effects:

  • If the other factors limiting the frame rate allow a frame rate of more than 20 fps, the frame rate is kept constant at 20 fps.
  • If the other factors limiting the frame rate only allow a frame rate of less than 20 fps, the frame rate won't be affected by the Acquisition Frame Rate feature.

  To determine the actual frame rate, use the Resulting Frame Rate feature.
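The limiting behavior described above reduces to taking the lower of two rates. The helper below is a hypothetical illustration of that rule, not a pylon API call:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical helper (not a pylon API call): the camera runs at the lower
// of the configured limit and the rate allowed by the other limiting factors.
double resultingFrameRate(double acquisitionFrameRateLimit,
                          double maxRateFromOtherFactors)
{
    return std::min(acquisitionFrameRateLimit, maxRateFromOtherFactors);
}
```

With a limit of 20 fps, a camera otherwise capable of 35 fps runs at 20 fps, while a camera limited to 15 fps by other factors is unaffected.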

Setting the Acquisition Frame Rate

  1. Set the AcquisitionFrameRateEnable parameter to true.
  2. Set the AcquisitionFrameRateAbs parameter to the desired upper limit for the camera's frame rate in frames per second.
// Set the upper limit of the camera's frame rate to 30 fps
camera.Parameters[PLCamera.AcquisitionFrameRateEnable].SetValue(true);
camera.Parameters[PLCamera.AcquisitionFrameRateAbs].SetValue(30.0);

Acquisition Mode

The Acquisition Mode camera feature allows you to choose between single frame or continuous image acquisition.

Available Acquisition Modes

In Single Frame acquisition mode, the camera acquires exactly one image. After the Acquisition Start command has been executed, the camera waits for trigger signals. When a Frame Start trigger signal has been received and an image has been acquired, the camera switches off image acquisition. To acquire another image, you must execute the Acquisition Start command again.

In Continuous acquisition mode, the camera continuously acquires and transfers images until acquisition is switched off. After the Acquisition Start command has been executed, the camera waits for trigger signals. The camera will continue acquiring images until an Acquisition Stop command is executed.

// configure single frame acquisition on the camera
camera.Parameters[PLCamera.AcquisitionMode].SetValue(PLCamera.AcquisitionMode.SingleFrame);
// Switch on image acquisition
camera.Parameters[PLCamera.AcquisitionStart].Execute();
// The camera waits for a trigger signal.
// When a Frame Start trigger signal has been received and an image
// has been acquired, the camera executes an Acquisition Stop command internally.
// Configure continuous image acquisition on the camera
camera.Parameters[PLCamera.AcquisitionMode].SetValue(PLCamera.AcquisitionMode.Continuous);
// Switch on image acquisition
camera.Parameters[PLCamera.AcquisitionStart].Execute();
// The camera waits for trigger signals.
// (...)
// Switch off image acquisition
camera.Parameters[PLCamera.AcquisitionStop].Execute();

Acquisition Start and Stop

Before a camera can start capturing images, image acquisition has to be switched on. Otherwise, the camera won't react to incoming trigger signals.

After the AcquisitionStop command has been executed, the following occurs:

  • If the camera is not currently acquiring a frame, the change becomes effective immediately.
  • If the camera is currently exposing a frame, exposure is stopped and readout is started. The readout process will be allowed to finish. Afterwards, image acquisition is switched off.
  • If the camera is currently reading out image data, the readout process will be allowed to finish. Afterwards, image acquisition is switched off.
// Configure continuous image acquisition on the cameras
camera.Parameters[PLCamera.AcquisitionMode].SetValue(PLCamera.AcquisitionMode.Continuous);
// Switch on image acquisition
camera.Parameters[PLCamera.AcquisitionStart].Execute();
// The camera waits for trigger signals.
// (...)
// Switch off image acquisition
camera.Parameters[PLCamera.AcquisitionStop].Execute();

Acquisition Status

The Acquisition Status camera feature allows you to determine whether the camera is waiting for trigger signals. This is useful if you want to optimize triggered image acquisition and avoid overtriggering.

Basler strongly recommends using the Acquisition Status feature only when the camera is configured for software triggering. When the camera is configured for hardware triggering, Basler recommends monitoring the camera's Trigger Wait signals instead.

To determine if the camera is currently waiting for trigger signals:

  1. Set the AcquisitionStatusSelector parameter to the desired trigger type. For example, if you want to determine if the camera is waiting for Frame Start trigger signals, set the AcquisitionStatusSelector to FrameTriggerWait.
  2. Get the value of the AcquisitionStatus parameter.

If the AcquisitionStatus parameter is true, the camera is waiting for a trigger signal of the trigger type selected. If the AcquisitionStatus parameter is false, the camera is busy.

// Specify that you want to determine if the camera is waiting for Frame Start trigger signals
camera.Parameters[PLCamera.AcquisitionStatusSelector].SetValue(PLCamera.AcquisitionStatusSelector.FrameTriggerWait);
// Get the acquisition status
bool isWaitingForFrameStart = camera.Parameters[PLCamera.AcquisitionStatus].GetValue();
if(isWaitingForFrameStart){    
    // It is now safe to apply Frame Start trigger signals
}

Action Commands

The Action Commands camera feature allows you to execute actions on multiple cameras at roughly the same time by using a single broadcast protocol message.

If you want to execute actions on multiple cameras at exactly the same time, use the scheduled Action Commands feature instead.

You can use action commands to perform the following tasks:

  • Synchronously acquire images with multiple cameras
  • Synchronously reset the frame counter on multiple cameras

Action commands are broadcast protocol messages that you can send to multiple devices in a GigE network.

Each action protocol message contains the following information:

  • Action device key
  • Action group key
  • Action group mask
  • Broadcast address (optional, default: 255.255.255.255)

If the camera is within the specified network segment and if the protocol information matches the action command configuration in the camera, the camera executes the corresponding action.
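The matching rule above (detailed in the sections that follow) can be sketched as a simple predicate. This is an illustration of the logic, not pylon API code:

```cpp
#include <cassert>
#include <cstdint>

// Sketch (not pylon API code) of the matching rule: the device key and group
// key must be identical, and the group masks must share at least one set bit.
struct ActionCommandInfo
{
    uint32_t deviceKey;
    uint32_t groupKey;
    uint32_t groupMask;
};

bool cameraExecutesAction(const ActionCommandInfo& message,
                          const ActionCommandInfo& camera)
{
    return message.deviceKey == camera.deviceKey
        && message.groupKey == camera.groupKey
        && (message.groupMask & camera.groupMask) != 0;
}
```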

Action Device Key

A 32-bit number of your choice used to authorize the execution of an action command on the camera. If the action device key on the camera and the action device key in the protocol message are identical, the camera executes the corresponding action. The device key is write-only; it can't be read out of the camera.

Action Group Key

A 32-bit number of your choice used to define a group of devices on which an action should be executed. Each camera can be assigned to one group only. If the action group key on the camera and the action group key in the protocol message are identical, the camera will execute the corresponding action.

Action Group Mask

A 32-bit number of your choice used to filter out a sub-group of cameras belonging to a group of cameras. The cameras belonging to a sub-group execute an action at the same time.

The filtering is done using a logical bitwise AND operation on the group mask number of the action command and the group mask number of a camera. If both binary numbers have at least one common bit set to 1 (i.e., the result of the AND operation is non-zero), the corresponding camera belongs to the sub-group.

Example: Assume that a group of six cameras is installed on an assembly line. To execute actions on specific sub-groups, the following group mask numbers have been assigned to the cameras (sample values):

Camera Group Mask Number (Binary) Group Mask Number (Hexadecimal)
1 000001 0x1
2 000010 0x2
3 000100 0x4
4 001000 0x8
5 010000 0x10
6 100000 0x20

In this example, an action command with an action group mask of 000111 (0x7) executes an action on cameras 1, 2, and 3. And an action command with an action group mask of 101100 (0x2C) executes an action on cameras 3, 4, and 6.

Broadcast Address

A string variable used to define where the action command will be broadcast to. When using the pylon API, the broadcast address must be in dot notation, e.g., "255.255.255.255" (all adapters), "192.168.1.255" (all devices in a single subnet 192.168.1.xxx), or "192.168.1.38" (a single device). This parameter is optional. If omitted, "255.255.255.255" will be used.

Example Setup

The following example setup will give you an idea of the basic concept of action commands. To analyze the movement of a horse, a group of cameras is installed parallel to a race track.

When the horse passes, four cameras (subgroup 1) synchronously execute an action (image acquisition in this example). As the horse advances, the next four cameras (subgroup 2) synchronously capture images. One after the other, the subgroups continue in this fashion until the horse has reached the end of the race track. The resulting images can be combined and analyzed in a subsequent step.

In this sample use case, the following must be defined:

  • A unique device key to authorize the execution of the synchronous image acquisition. The device key must be configured on each camera and it must be the same as the device key for the action command protocol message. To define the device key, use the action device key.
  • The group of cameras in a network segment that is addressed by the action command (in this example: group 1). To define the groups, use the action group key.
  • The subgroups in the group of cameras that capture images synchronously (in this example: subgroups 1, 2, and 3). To define the subgroups, use the action group mask.

Using Action Commands

Configuring the Cameras

To configure the cameras so that they can receive action commands and perform one or more of the supported tasks, follow the steps below. The same procedure applies if you want to configure Scheduled Action Commands on your cameras.

  1. Make sure that the following requirements are met:
    • All cameras on which you want to configure action commands must be installed and configured in the same network segment.
    • The Action Commands feature must be supported by all cameras and by the Basler pylon API you are using to configure and send action commands.
  2. Open the connection to one of the cameras that you want to control using action commands.
  3. If you want to use the Action Command feature to synchronously acquire images:
    1. Set the TriggerSelector parameter to FrameStart.
    2. Set the TriggerMode parameter to On.
    3. Set the TriggerSource parameter to Action1.
  4. If you want to use the Action Command feature to synchronously reset the frame counter:
    • Set the CounterResetSource parameter to Action1.
  5. Configure the following action command-specific parameters:
    • ActionDeviceKey
    • ActionGroupKey
    • ActionGroupMask
  6. Repeat steps 2 to 5 on all cameras.

Issuing an Action Command

To issue an action command, call the IssueActionCommand method in your application.

Additional Parameters

  • ActionCommandCount: Determines how many different action commands can be assigned to the device. Currently, this number is limited to 1 for all Basler cameras.
  • ActionSelector: Specifies the action command to be configured.

// Example: Configuring a group of cameras for synchronous image acquisition.
// It is assumed that the "cameras" object is an instance of CBaslerGigEInstantCameraArray.
//--- Start of camera setup ---
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
    // Open the camera connection
    cameras[i].Open();
    // Configure the trigger selector
    cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
    // Select the mode for the selected trigger
    cameras[i].TriggerMode.SetValue(TriggerMode_On);
    // Configure the source for the selected trigger
    cameras[i].TriggerSource.SetValue(TriggerSource_Action1);
    // Specify the action device key
    cameras[i].ActionDeviceKey.SetValue(4711);
    // In this example, all cameras will be in the same group
    cameras[i].ActionGroupKey.SetValue(1);
    // Specify the action group mask.
    // In this example, all cameras will respond to any mask other than 0.
    cameras[i].ActionGroupMask.SetValue(0xffffffff);
}
//--- End of camera setup ---
// Send an action command to the cameras
GigeTL->IssueActionCommand(4711, 1, 0xffffffff, "192.168.1.255");

Auto Functions

Auto functions are particularly useful to maintain good image quality when imaging conditions change frequently. Most auto functions are the automatic counterparts to setting a parameter manually. For example, the Gain Auto feature controls the GainRaw parameter automatically within specified limits.

  • Gain Auto
  • Exposure Auto
  • Balance White Auto
  • Pattern Removal Auto

The individual auto functions can be used at the same time. If you are using Exposure Auto and Gain Auto at the same time, you can use the Auto Function Profile feature to specify how the effects of gain and exposure time are balanced. The pixel data for the auto functions can come from one or multiple Auto Function ROIs. To operate properly, at least one Auto Function ROI must be assigned to each auto function.

Auto Function Profile

The Auto Function Profile camera feature allows you to specify how gain and exposure time are balanced when the camera is making automatic adjustments.

To set the auto function profile, set the AutoFunctionProfile parameter to one of the following values:

  • GainMinimum 
    • Minimize Gain (= Gain Minimum)
    • Gain is kept as low as possible during the automatic adjustment process.
  • ExposureMinimum
    • Minimize Exposure Time (= Exposure Minimum)
    • Exposure time is kept as low as possible during the automatic adjustment process.
  • Smart (if available)
    • Gain is kept as low as possible and the frame rate will be kept as high as possible during automatic adjustments.

      This is a four-step process:

    • The camera adjusts the exposure time to achieve the target brightness value.
    • If the exposure time must be increased to achieve the target brightness value, the camera increases the exposure time until a lowered frame rate is detected.
    • If a lowered frame rate is detected, the camera stops increasing the exposure time and increases gain until the AutoGainRawUpperLimit value is reached.
    • If the AutoGainRawUpperLimit value is reached, the camera stops increasing gain and increases the exposure time until the target brightness value is reached. This results in a lower frame rate.
  • Antiflicker50Hz (if available)
  • AntiFlicker60Hz (if available)
    • Gain and exposure time are optimized to reduce flickering. If the camera is operating in an environment where the lighting flickers at a 50-Hz or a 60-Hz rate, the flickering lights can cause significant changes in brightness from image to image. Enabling the anti-flicker profile may reduce the effect of the flickering in the captured images.

      Choose the frequency (50 Hz or 60 Hz) according to your local power line frequency (e.g., North America: 60 Hz, Europe: 50 Hz).

// Set the auto function profile to Gain Minimum
camera.Parameters[PLCamera.AutoFunctionProfile].SetValue(PLCamera.AutoFunctionProfile.GainMinimum);
// Set the auto function profile to Exposure Minimum
camera.Parameters[PLCamera.AutoFunctionProfile].SetValue(PLCamera.AutoFunctionProfile.ExposureMinimum);
// Enable the Gain Auto and Exposure Auto auto functions and set the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);
camera.Parameters[PLCamera.ExposureAuto].SetValue(PLCamera.ExposureAuto.Continuous);

Auto Function ROI

The Auto Function ROI camera feature allows you to specify the part of the sensor array that you want to use to control the camera's auto functions. You can create several Auto Function ROIs, each occupying different parts of the sensor array. The settings for the Auto Function ROI feature are independent of the settings for the Image ROI feature.

Changing Position and Size of an Auto Function ROI

By default, all Auto Function ROIs are set to the full resolution of the camera's sensor. However, you can change their positions and sizes as required. To change the position and size of an Auto Function ROI:

  1. Set the AutoFunctionAOISelector parameter to one of the available Auto Function ROIs, e.g., AOI1.
  2. Enter values for the following parameters to specify the position of the Auto Function ROI selected:
    • AutoFunctionAOIOffsetX
    • AutoFunctionAOIOffsetY
  3. Enter values for the following parameters to specify the size of the Auto Function ROI selected:
    • AutoFunctionAOIWidth
    • AutoFunctionAOIHeight

The position of an Auto Function ROI is specified based on the rows and columns of the sensor array.

Example: Assume that you have selected Auto Function ROI 1 and specified the following settings:

  • AutoFunctionAOIOffsetX = 14
  • AutoFunctionAOIOffsetY = 7
  • AutoFunctionAOIWidth = 5
  • AutoFunctionAOIHeight = 6

This defines Auto Function ROI 1 as a 5 x 6 pixel region whose top left corner is at column 14, row 7 of the sensor array.

Only the pixel data from the area of overlap between the Auto Function ROI and the Image ROI will be used by the auto function assigned to it.

  • If the Binning feature is enabled, the Auto Function ROI settings refer to the binned lines and columns and not to the physical lines in the sensor.
  • If the Reverse X or Reverse Y feature or both are enabled, the position of the Auto Function ROI relative to the sensor remains the same. As a consequence, different regions of the image will be controlled depending on whether or not Reverse X, Reverse Y or both are enabled.

Assigning Auto Functions

By default, each Auto Function ROI is assigned to a specific auto function. For example, the pixel data from Auto Function ROI 2 is used to control the Balance White Auto auto function.

On some camera models, the default assignments can be changed. To do so:

  1. Set the AutoFunctionAOISelector parameter to one of the available Auto Function ROIs, e.g., AOI1.
  2. Set the AutoFunctionAOIUsageWhiteBalance parameter to true if you want to assign Balance White Auto to the Auto Function ROI selected.
  3. Set the AutoFunctionAOIUsageIntensity parameter to true if you want to assign Exposure Auto and Gain Auto to the Auto Function ROI selected. (Exposure Auto and Gain Auto always work together.)
  • If you assign one auto function to multiple Auto Function ROIs, the pixel data from all selected Auto Function ROIs will be used for the auto function.
  • If you assign multiple auto functions to one Auto Function ROI, the pixel data from the Auto Function ROI will be used for all auto functions selected.

Exposure Auto and Gain Auto Assignments Work Together

When making Auto Function ROI assignments, the Gain Auto auto function and the Exposure Auto auto function always work together. They are considered a single auto function named "Intensity" or "Brightness", depending on your camera model.

This does not imply, however, that Gain Auto and Exposure Auto must always be enabled at the same time.

Guidelines

When you are setting an Auto Function ROI, you must follow these guidelines:

  • AutoFunctionAOIOffsetX + AutoFunctionAOIWidth ≤ Width of camera sensor
    Example for a camera with a 1920 x 1080 pixel sensor: AutoFunctionAOIOffsetX + AutoFunctionAOIWidth ≤ 1920
  • AutoFunctionAOIOffsetY + AutoFunctionAOIHeight ≤ Height of camera sensor
    Example for a camera with a 1920 x 1080 pixel sensor: AutoFunctionAOIOffsetY + AutoFunctionAOIHeight ≤ 1080
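The guidelines above can be checked programmatically. The function below is a hypothetical helper for that check, not a camera API call:

```cpp
#include <cassert>

// Hypothetical helper that checks the two Auto Function ROI guidelines
// against a given sensor size; it is not a camera API call.
bool isValidAutoFunctionRoi(int offsetX, int width, int offsetY, int height,
                            int sensorWidth, int sensorHeight)
{
    return offsetX + width <= sensorWidth
        && offsetY + height <= sensorHeight;
}
```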

Overlap Between Auto Function ROI and Image ROI

The size and position of an Auto Function ROI can be identical to the size and position of the Image ROI, but this is not a requirement. For an auto function to work, it is sufficient if both ROIs overlap each other partially.

The overlap between Auto Function ROI and Image ROI determines whether and to what extent the auto function will control the related image property. Only the pixel data from the areas of overlap will be used by the auto function to control the image property of the entire image.

  • If the Auto Function ROI is completely included in the Image ROI, the pixel data from the Auto Function ROI will be used to control the image property.

  • If the Image ROI is completely included in the Auto Function ROI, only the pixel data from the Image ROI will be used to control the image property.

  • If the Auto Function ROI overlaps the Image ROI only partially, only the pixel data from the area of partial overlap will be used to control the image property.

  • If the Auto Function ROI does not overlap the Image ROI, the related auto function will not work.

Basler strongly recommends completely including the Auto Function ROI within the Image ROI or choosing identical positions and sizes for Auto Function ROI and Image ROI.
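The overlap cases above amount to a rectangle intersection. The sketch below illustrates that computation; it is not part of the pylon API:

```cpp
#include <algorithm>
#include <cassert>

// Computes the rectangle of overlap between two ROIs. A width or height of
// 0 means the ROIs don't overlap and the auto function won't work.
struct Roi
{
    int x, y, width, height;
};

Roi overlap(const Roi& a, const Roi& b)
{
    int left   = std::max(a.x, b.x);
    int top    = std::max(a.y, b.y);
    int right  = std::min(a.x + a.width,  b.x + b.width);
    int bottom = std::min(a.y + a.height, b.y + b.height);
    return Roi{left, top, std::max(0, right - left), std::max(0, bottom - top)};
}
```

An Auto Function ROI fully inside the Image ROI yields an overlap identical to the Auto Function ROI itself; disjoint ROIs yield an empty overlap.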

Specifics

Camera Model Auto Function ROIs Default Assignments Assignments Can Be Changed
All ace GigE camera models AOI 1, AOI 2 AOI 1: Intensity (Gain Auto + Exposure Auto); AOI 2: White Balance (Balance White Auto) Yes

// Select Auto Function AOI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Specify position and size of the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIOffsetX].SetValue(10);
camera.Parameters[PLCamera.AutoFunctionAOIOffsetY].SetValue(10);
camera.Parameters[PLCamera.AutoFunctionAOIWidth].SetValue(500);
camera.Parameters[PLCamera.AutoFunctionAOIHeight].SetValue(400);
// Enable Balance White Auto for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageWhiteBalance].SetValue(true);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
// Note: On some camera models, you must use AutoFunctionROIUseIntensity instead
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);

Binning

The Binning camera feature allows you to combine sensor pixel values into a single value. This may increase the signal-to-noise ratio or the camera's response to light. To configure binning, the camera must be idle, i.e., not capturing images.

On monochrome cameras, the camera combines (sums or averages) the pixel values of directly adjacent pixels.

On color cameras, the camera combines (sums or averages) the pixel values of adjacent pixels of the same color.

Specifying a Binning Factor

You can choose between horizontal and vertical binning. You can use both binning directions at the same time or configure only vertical or only horizontal binning.

  • With horizontal binning, adjacent pixels from a certain number of columns in the image sensor are combined.
  • With vertical binning, adjacent pixels from a certain number of rows in the image sensor are combined.

To specify a horizontal binning factor, enter a value for the BinningHorizontal parameter. To specify the vertical binning factor, enter a value for the BinningVertical parameter. The value of the parameters defines the binning factor. Depending on your camera model, the following values are available:

  • 1: Disables binning.
  • 2, 3, 4: Specifies the number of binned columns or rows (2, 3, or 4).

For example, entering a value of 3 for BinningHorizontal enables horizontal binning by 3. You can use horizontal and vertical binning at the same time. However, if you use different binning factors, objects will appear distorted in the image.

Choosing a Binning Mode

To select the binning mode for horizontal binning, set the BinningHorizontalMode parameter. To select the binning mode for vertical binning, set the BinningVerticalMode parameter. The binning mode defines how pixels are combined when binning is enabled. Depending on your camera model, the following binning modes are available:

  • Sum: The values of the affected pixels are summed. This improves the signal-to-noise ratio, but also increases the camera’s response to light.
  • Average: The values of the affected pixels are averaged. This greatly improves the signal-to-noise ratio without affecting the camera’s response to light.

Both modes reduce the amount of image data to be transferred. This may increase the camera's frame rate.
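The two modes can be sketched for a single bin of pixel values. The clipping of the summed value at the pixel format's maximum gray value is an assumption that matches the overexposure behavior described for the Sum mode; it is not taken from the camera documentation:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// One bin of pixel values combined in the two binning modes. Clipping the
// summed value at the maximum gray value is an assumption made for this
// sketch; it mirrors how summed binning can saturate (overexpose) pixels.
int binSum(const std::vector<int>& pixels, int maxGrayValue)
{
    int sum = std::accumulate(pixels.begin(), pixels.end(), 0);
    return sum < maxGrayValue ? sum : maxGrayValue;
}

int binAverage(const std::vector<int>& pixels)
{
    int sum = std::accumulate(pixels.begin(), pixels.end(), 0);
    return sum / static_cast<int>(pixels.size());
}
```

For a 2 x 2 bin of mid-gray 8-bit pixels, Sum saturates at 255 while Average leaves the brightness unchanged.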

Considerations When Using Binning

  • Effect on ROI Settings

When you are using binning, the settings for your Image ROIs and Auto Function ROIs refer to the binned rows and columns. For example, assume that you are using a camera with a 1280 x 960 sensor. Horizontal binning by 2 and vertical binning by 2 are enabled. In this case, the maximum ROI width is 640 and the maximum ROI height is 480.

  • Increased Response to Light

Using binning with the binning mode set to Sum can significantly increase the camera’s response to light. When pixel values are summed, the acquired images may look overexposed. If this is the case, you can reduce the lens aperture, the intensity of your illumination, the camera’s Exposure Time setting, or the camera’s Gain setting.

  • Reduced Resolution

Using binning effectively reduces the resolution of the camera’s imaging sensor. For example, if you enable horizontal binning by 2 and vertical binning by 2 on a camera with a 1280 x 960 sensor, the effective resolution of the sensor is reduced to 640 x 480.

  • Possible Image Distortion

Objects will only appear undistorted in the image if the numbers of binned lines and columns are equal. With all other combinations, objects will appear distorted. For example, if you combine vertical binning by 2 with horizontal binning by 4, the target objects will appear squashed.
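The effective-resolution arithmetic from the considerations above can be sketched as follows. Integer division is an assumption for this illustration; actual cameras may round dimensions to specific increments:

```cpp
#include <cassert>

// Effective sensor resolution after binning: each dimension is divided by
// its binning factor. Integer division is an assumption of this sketch;
// actual cameras may round to specific increments.
struct Resolution
{
    int width;
    int height;
};

Resolution effectiveResolution(Resolution sensor, int binHorizontal, int binVertical)
{
    return Resolution{sensor.width / binHorizontal, sensor.height / binVertical};
}
```

For the 1280 x 960 sensor example with 2 x 2 binning, this yields 640 x 480, which is also the maximum ROI size in that configuration.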

Binning Factors

Camera Model Horizontal Binning Factors Vertical Binning Factors Allowed Combinations (H x V Binning)
acA2500-20gm 1, 2, 3, 4 1, 2, 3, 4 All combinations

Binning Modes

Camera Model Horizontal Binning Modes Vertical Binning Modes Allowed Combinations (H x V Binning Mode)
acA2500-20gm Sum, Average Sum, Average All combinations
// Enable horizontal binning by 4
camera.Parameters[PLCamera.BinningHorizontal].SetValue(4);
// Enable vertical binning by 2
camera.Parameters[PLCamera.BinningVertical].SetValue(2);
// Set the horizontal binning mode to Average
camera.Parameters[PLCamera.BinningHorizontalMode].SetValue(PLCamera.BinningHorizontalMode.Average);
// Set the vertical binning mode to Sum
camera.Parameters[PLCamera.BinningVerticalMode].SetValue(PLCamera.BinningVerticalMode.Sum);

Black Level

The Black Level camera feature allows you to change the overall brightness of an image by shifting the gray values of all pixels by a specified amount. For example, if you set a black level that results in a gray value increase of 3, the gray value of each pixel in the image is increased by 3 (new gray value = original gray value + 3). To adjust the black level, enter a value for the BlackLevel parameter. The minimum black level setting is 0.

Camera Model Maximum Black Level [DN]
acA2500-20gm 255

Black Level Effect

Camera Model Change in BlackLevel Parameter Value Resulting Change in Gray Value
acA2500-20gm 8-bit pixel format: +/- 4 +/- 1
acA2500-20gm 10-bit pixel format: +/- 1 +/- 1
acA2500-20gm 12-bit pixel format: +/- 1 +/- 1
// Set the black level to 32
camera.Parameters[PLCamera.BlackLevelRaw].SetValue(32);
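The Black Level Effect table can be reproduced with a small sketch. This is a hypothetical helper illustrating the tabulated mapping for the acA2500-20gm, not a camera API call:

```cpp
#include <cassert>

// Hypothetical helper reproducing the Black Level Effect table for the
// acA2500-20gm: in 8-bit pixel formats a BlackLevel change of 4 shifts the
// gray value by 1; in 10-bit and 12-bit formats the mapping is 1:1.
int resultingGrayValueChange(int blackLevelChange, int pixelFormatBits)
{
    return pixelFormatBits == 8 ? blackLevelChange / 4 : blackLevelChange;
}
```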

Center X and Center Y

The Center X and Center Y camera features allow you to center the Image ROI on the camera's sensor automatically.

Enabling Center X

To enable Center X, set the CenterX parameter to true. The camera automatically adjusts the OffsetX parameter value to center the Image ROI horizontally. The OffsetX parameter becomes read-only.

Enabling Center Y

To enable Center Y, set the CenterY parameter to true. The camera automatically adjusts the OffsetY parameter value to center the Image ROI vertically. The OffsetY parameter becomes read-only.

  • You can use Center X and Center Y at the same time.
  • When you enable Center X or Center Y, the camera doesn't save the current OffsetX and OffsetY parameter values. To restore the original settings, you must adjust the OffsetX and OffsetY parameters manually.
// Enable Center X
camera.Parameters[PLCamera.CenterX].SetValue(true);
// Enable Center Y
camera.Parameters[PLCamera.CenterY].SetValue(true);
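The offset the camera computes when centering can be sketched as below. This is a hypothetical illustration; actual cameras may round the result to the nearest allowed offset increment:

```cpp
#include <cassert>

// Hypothetical sketch of the offset the camera computes when Center X or
// Center Y is enabled. Actual cameras may round the result to the nearest
// allowed offset increment.
int centeredOffset(int sensorSize, int roiSize)
{
    return (sensorSize - roiSize) / 2;
}
```

For example, a 500-pixel-wide ROI on a 1920-pixel-wide sensor is centered at OffsetX = 710.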

Counter

The Counter camera feature allows you to count certain camera events, e.g., the number of images acquired. You can get the current value of a counter by retrieving the related data chunk. If your camera supports the Counter feature, multiple counters are available. With one exception (see below), every counter has the following characteristics:

  • It starts at 0.
  • It counts a specific type of event (the "event source"). For example, it counts the number of images acquired. The event source is preset and can't be changed.
  • Its current value can be determined by retrieving the related data chunk, e.g., the Frame Counter chunk.
  • Its maximum value is 4 294 967 295. After reaching the maximum value, the counter is reset to 0 and then continues counting.
  • It can be reset manually.
  • It is reset to 0 whenever the camera is powered off and on again.

Exception: On some camera models, Counter 2 can be used to control the sequencer. This counter has different characteristics due to its specific purpose.
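The characteristics above map directly onto a 32-bit unsigned integer, whose maximum value is 4 294 967 295 and which wraps to 0 on overflow. This is a sketch of the counting behavior, not camera firmware code:

```cpp
#include <cassert>
#include <cstdint>

// The counter characteristics map onto a 32-bit unsigned integer: the
// maximum value is 4 294 967 295, and incrementing past it wraps to 0.
// Sketch of the behavior only, not camera firmware code.
uint32_t incrementCounter(uint32_t counter)
{
    return counter + 1; // unsigned arithmetic wraps around to 0
}
```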

Getting the Value of a Counter

To get the current value of a counter, retrieve the related data chunk using the Data Chunks feature.

Resetting a Counter

To reset a counter:

  1. Set the CounterSelector parameter to the desired counter, e.g., Counter2.
  2. Set the CounterResetSource parameter to a software source (Software) or to a hardware source (e.g., Line1).
  3. Depending on the source selected in step 2, do one of the following:
    1. If you set a software source, execute the CounterReset command.
    2. If you set a hardware source, apply an electrical signal to one of the camera's input lines.

Additional Parameters

  • The CounterEventSource parameter allows you to get the event source of the currently selected counter, i.e., determine which type of event increases the counter.
  • The CounterResetActivation parameter currently serves no function. It is preset to RisingEdge. This means that if the counter is configured for hardware reset, the counter resets when the hardware trigger signal rises.
Camera Model Counter Name Function Event Source Related Data Chunk Can Be Reset
All ace GigE camera models Counter 1 Counts the number of hardware Frame Start trigger signals received, regardless of whether they cause image acquisitions Frame Trigger Trigger Input Counter chunk Yes
All ace GigE camera models Counter 2 Counts the number of acquired images Frame Start Frame Counter chunk Yes
// Reset Counter 1 via software command
camera.Parameters[PLCamera.CounterSelector].SetValue(PLCamera.CounterSelector.Counter1);
camera.Parameters[PLCamera.CounterResetSource].SetValue(PLCamera.CounterResetSource.Software);
camera.Parameters[PLCamera.CounterReset].Execute();
// Get the event source of Counter 1
camera.Parameters[PLCamera.CounterSelector].SetValue(PLCamera.CounterSelector.Counter1);
string e = camera.Parameters[PLCamera.CounterEventSource].GetValue();

Data Chunks

Data chunks allow you to add supplementary information to individual image acquisitions. The desired supplementary information is generated and appended as data chunks to the image data. The image data itself is also considered a "chunk". This "image data chunk" can't be disabled and is always the first chunk transmitted by the camera. If one or more data chunks are enabled, these chunks are transmitted as chunk 2, 3, and so on.

A set of chunks thus consists of the leading image data chunk followed by the appended data chunks, e.g., a CRC checksum chunk if the CRC Checksum chunk feature is enabled.

After data chunks have been transmitted to the computer, they must be retrieved to obtain their information. The exact procedure depends on your camera model and the programming language used for your application. For more information about retrieving data chunks, see the Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.

Additional Metadata

Besides the data chunks, the camera adds additional metadata to individual images, e.g., the image height, image width, the Image ROI offset, and the pixel format used. This information can be retrieved by accessing the grab result data via the pylon API.

If all of the following conditions are met, the grab result data doesn't contain any useful information (image height, image width, etc. will be set to -1):

  • You are using a Basler ace classic GigE camera.
  • You are using the pylon C API, the pylon C.NET API, or the pylon C++ low level API.
  • The ChunkModeActive parameter is set to true.

In this case, you must retrieve the additional metadata using the pylon chunk parser. For more information, see the code samples in the Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.

Enabling and Retrieving Data Chunks

  1. Set the ChunkModeActive parameter to true.
  2. Set the ChunkSelector parameter to the kind of chunk that you want to enable:
    • GainAll
    • ExposureTime
    • Timestamp
    • LineStatusAll
    • TriggerInputCounter (if available)
    • CounterValue (if available)
    • FrameCounter (if available)
    • SequenceSetIndex or SequencerSetActive (if available)
    • PayloadCRC16
  3. Enable the selected chunk by setting the ChunkEnable parameter to true.
  4. Repeat steps 2 and 3 for every desired chunk.
  5. Implement chunk retrieval in your application.

    For information about implementing chunk retrieval, see the Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.

Data chunks can also be viewed in the pylon Viewer.

Available Data Chunks

Gain Chunk (= GainAll Chunk)

If this chunk is available and enabled, the camera appends the gain used for image acquisition to every image. The data chunk includes the GainRaw parameter value.

Exposure Time Chunk

If this chunk is enabled, the camera appends the exposure time used for image acquisition to every image. The data chunk includes the ExposureTimeAbs parameter value. When using the Trigger Width exposure mode, the Exposure Time chunk feature is not available.

Timestamp Chunk

If this chunk is enabled, the camera appends the internal timestamp (in ticks) of the moment when image acquisition was triggered to every image.
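The timestamp is delivered in ticks of the camera's internal clock, so it usually has to be converted to a time unit on the host. As a sketch, the conversion below assumes a tick frequency of 125 MHz (1 tick = 8 ns), which is common for ace GigE cameras; verify the actual value for your model, e.g., via the GevTimestampTickFrequency parameter.

```csharp
using System;

class TimestampDemo
{
    // Assumed tick frequency: 125 MHz (1 tick = 8 ns), typical for ace GigE cameras.
    // Check your camera's GevTimestampTickFrequency parameter to confirm.
    const double TickFrequencyHz = 125_000_000.0;

    // Convert a raw Timestamp chunk value (in ticks) to seconds.
    static double TicksToSeconds(long ticks) => ticks / TickFrequencyHz;

    static void Main()
    {
        long chunkTimestamp = 250_000_000; // example value read from a Timestamp chunk
        Console.WriteLine(TicksToSeconds(chunkTimestamp)); // 2
    }
}
```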

Line Status All Chunk

If this chunk is enabled, the camera appends the status of all I/O lines at the moment when image acquisition was triggered to every image.

The data chunk includes the LineStatusAll parameter value.

Trigger Input Counter Chunk

If this chunk is available and enabled, the camera appends the number of hardware frame start trigger signals received to every image.

To do so, the camera retrieves the current value of the Counter 1 counter. On cameras with the Trigger Input Counter chunk, Counter 1 counts the number of hardware trigger signals received.

To manually reset the counter, reset Counter 1.

The trigger input counter only counts hardware trigger signals. If the camera is configured for software triggering or free run, the counter value will not increase.

Counter Value Chunk

If this chunk is available and enabled, the camera appends the number of acquired images to every image.

To do so, the camera retrieves the current value of the Counter 1 counter. On cameras with the Counter Value chunk, Counter 1 counts the number of acquired images.

To manually reset the counter, reset Counter 1.

Frame Counter Chunk

If this chunk is available and enabled, the camera appends the number of acquired images to every image.

To do so, the camera retrieves the current value of the Counter 2 counter. On cameras with the Frame Counter chunk, Counter 2 counts the number of acquired images.

To manually reset the counter, reset Counter 2.

Numbers in the counting sequence may be skipped when the acquisition mode is changed from Continuous to Single Frame. Numbers may also be skipped when overtriggering occurs.

Sequencer Set Active Chunk (= Sequence Set Index Chunk)

If this chunk is available and enabled, the camera appends the sequencer set used for image acquisition to every image.

The data chunk includes the SequencerSetActive or SequenceSetIndex parameter value (depending on your camera model).

Enabling this chunk is only useful if the camera's Sequencer feature is used for image acquisition.

CRC Checksum Chunk Feature

If this chunk is enabled, the camera appends a CRC (Cyclic Redundancy Check) checksum to every image.

The checksum is calculated using the X-modem method and includes the image data and all appended chunks, if any, except for the CRC chunk itself.

The CRC checksum chunk is always the last chunk appended to image data.
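The X-modem method mentioned above corresponds to CRC-16/XMODEM (polynomial 0x1021, initial value 0x0000, no bit reflection, no final XOR). The host can recompute this checksum over the image data and all appended chunks except the CRC chunk itself and compare it with the transmitted value. A minimal sketch of the checksum calculation:

```csharp
using System;
using System.Text;

class Crc16Xmodem
{
    // CRC-16/XMODEM: polynomial 0x1021, init 0x0000, no reflection, no final XOR.
    public static ushort Compute(byte[] data)
    {
        ushort crc = 0x0000;
        foreach (byte b in data)
        {
            crc ^= (ushort)(b << 8);
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0
                    ? (ushort)((crc << 1) ^ 0x1021)
                    : (ushort)(crc << 1);
        }
        return crc;
    }

    static void Main()
    {
        // Standard check value for CRC-16/XMODEM: "123456789" -> 0x31C3
        Console.WriteLine(Compute(Encoding.ASCII.GetBytes("123456789")).ToString("X4")); // 31C3
    }
}
```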

Specifics

Camera Model: All ace GigE camera models

Available Data Chunks:

  • Gain All Chunk
  • Exposure Time Chunk
  • Timestamp Chunk
  • Line Status All Chunk
  • Trigger Input Counter Chunk
  • Frame Counter Chunk
  • Sequence Set Index Chunk
  • CRC Checksum Chunk
// Enable data chunks
camera.Parameters[PLCamera.ChunkModeActive].SetValue(true);
// Select and enable Gain All chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.GainAll);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Exposure Time chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.ExposureTime);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Timestamp chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.Timestamp);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Line Status All chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.LineStatusAll);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Trigger Input Counter chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.TriggerInputCounter);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Frame Counter chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.FrameCounter);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Sequence Set Index chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.SequenceSetIndex);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable CRC checksum chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.PayloadCRC16);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);

Device Information Parameters

Standard Device Information Parameters

All Basler cameras mentioned in this documentation provide the following device information parameters:

Parameter Name (R = read-only, RW = read/write): Description

  • DeviceVendorName (R): The camera's vendor name, e.g., Basler.
  • DeviceModelName (R): The camera's model name, e.g., acA3800-14um.
  • DeviceManufacturerInfo (R): The camera's manufacturer name. Usually contains an empty string.
  • DeviceVersion (R): The camera's version number.
  • DeviceFirmwareVersion (R): The camera's firmware version number.
  • DeviceID (R): The camera's serial number.
  • DeviceUserID (RW): Used to assign a user-defined name to a camera. The name is displayed in the Basler pylon Viewer and the Basler pylon USB Configurator. The name is also visible in the "friendly name" field of the device information objects returned by pylon's device enumeration procedure.
  • DeviceScanType (R): The scan type of the camera's sensor (Areascan or Linescan).
  • SensorWidth (R): The actual width of the camera's sensor in pixels.
  • SensorHeight (R): The actual height of the camera's sensor in pixels.
  • WidthMax (R): The maximum allowed width of the Image ROI in pixels. The value adapts to the current settings for Binning, Decimation, or Scaling (if available).
  • HeightMax (R): The maximum allowed height of the Image ROI in pixels. The value adapts to the current settings for Binning, Decimation, or Scaling (if available).
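DeviceUserID is the only writable parameter among the standard device information parameters. Following the document's C# examples, assigning and reading back a user-defined name could look like this (the name "Camera-Line-1" is just an illustrative value):

```csharp
// Assign a user-defined name to the camera
camera.Parameters[PLCamera.DeviceUserID].SetValue("Camera-Line-1");
// Read the user-defined name back
string userId = camera.Parameters[PLCamera.DeviceUserID].GetValue();
```

The name persists in the camera and is shown as the "friendly name" during device enumeration, which makes it easier to tell identical camera models apart in multi-camera setups.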

Additional Device Information Parameters

Depending on your camera model, the following additional device information parameters are available:

Parameter Name (R = read-only, RW = read/write): Description

  • DeviceSFNCVersionMajor (R): If available, the major version of the Standard Features Naming Convention (SFNC) that the camera complies with, e.g., "2" for SFNC 2.3.1.
  • DeviceSFNCVersionMinor (R): If available, the minor version of the SFNC that the camera complies with, e.g., "3" for SFNC 2.3.1.
  • DeviceSFNCVersionSubMinor (R): If available, the subminor version of the SFNC that the camera complies with, e.g., "1" for SFNC 2.3.1.
  • DeviceLinkSelector (RW): If available, allows you to select the link for data transmission. The parameter is preset to 0. Do not change this parameter.
  • DeviceLinkSpeed (R): If available, the bandwidth negotiated on the specified link in bytes per second.
  • DeviceLinkThroughputLimitMode (RW): If available, allows you to limit the maximum available bandwidth for data transmission. To enable the limit, set the parameter to On. The bandwidth is then limited to the DeviceLinkThroughputLimit parameter value.
  • DeviceLinkThroughputLimit (RW): If available, specifies the maximum available bandwidth for data transmission in bytes per second. To enable the limit, set the DeviceLinkThroughputLimitMode parameter to On.
  • DeviceLinkCurrentThroughput (R): If available, the bandwidth currently used for data transmission in bytes per second.
  • DeviceIndicatorMode (RW): If available, allows you to turn the camera's status LED on or off. To turn the status LED on, set the parameter to Active. To turn it off, set the parameter to Inactive.
// Example: Getting some of the camera's device information parameters
// Get the camera's vendor name
string s = camera.Parameters[PLCamera.DeviceVendorName].GetValue();
// Get the camera's firmware version
s = camera.Parameters[PLCamera.DeviceFirmwareVersion].GetValue();
// Get the camera's model name
s = camera.Parameters[PLCamera.DeviceModelName].GetValue();
// Get the width of the camera's sensor
Int64 i = camera.Parameters[PLCamera.SensorWidth].GetValue();
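If the DeviceLink parameters are available on your camera model (they are not present on all ace classic GigE cameras, so availability is an assumption here), limiting the transmission bandwidth could be sketched like this:

```csharp
// Enable the bandwidth limit and cap transmission at 50,000,000 bytes/s
// Note: These parameters are only available on some camera models.
camera.Parameters[PLCamera.DeviceLinkThroughputLimitMode].SetValue(PLCamera.DeviceLinkThroughputLimitMode.On);
camera.Parameters[PLCamera.DeviceLinkThroughputLimit].SetValue(50000000);
```

Capping the throughput can help when several cameras share one network link, at the cost of a lower maximum frame rate.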

Error Codes

The camera can detect errors that you can correct yourself. If such an error occurs, the camera assigns an error code to this error and stores the error code in memory. After you have corrected the error, you can clear the error code from the list.

If several different errors have occurred, the camera stores the code for each type of error detected. The camera stores each code only once regardless of how many times it has detected the corresponding error.

Checking and Clearing Error Codes

Checking and clearing error codes is an iterative process, depending on how many errors have occurred.

  1. To check the last error code in the memory, get the value of the LastError parameter.
  2. Correct the corresponding error.
  3. To delete the last error code from the list of error codes, execute the ClearLastError command.
  4. Continue getting and deleting the last error code until the LastError parameter shows NoError.
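The check-and-clear iteration in the steps above can be sketched as a loop, following the document's C# examples (the comparison against the string "NoError" assumes the parameter value matches the name given in step 4):

```csharp
// Read and clear error codes until the error memory reports NoError
string lastError = camera.Parameters[PLCamera.LastError].GetValue();
while (lastError != "NoError")
{
    Console.WriteLine("Camera error: " + lastError);
    // Correct the corresponding error here, then clear the code
    camera.Parameters[PLCamera.ClearLastError].Execute();
    lastError = camera.Parameters[PLCamera.LastError].GetValue();
}
```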

Available Error Codes

Error Code  Value  Meaning

0  No Error  The camera hasn't detected any errors since the last time the error memory was cleared.
1  Overtrigger  An overtrigger has occurred.
  • The user has applied an Acquisition Start trigger to the camera when the camera was not in a waiting-for-acquisition-start condition.
Or:
  • The user has applied a Frame Start trigger to the camera when the camera was not in a waiting-for-frame-start condition.
2  User Set  An error occurred when attempting to load a user set. Typically, this means that the user set contains an invalid value. Try loading a different user set.
3  Invalid Parameter  A parameter has been entered that is out of range or otherwise invalid. Typically, this error only occurs when the user sets parameters via direct register access.
4  Over Temperature  The camera is in over temperature mode. This error indicates that an over temperature condition exists and that damage to camera components may occur.
5  Power Failure  The power supply is not sufficient. Check the power supply.
6  Insufficient Trigger Width  In Trigger Width exposure mode, a trigger signal was shorter than the minimum exposure time.

Specifics

Camera Model Available Error Codes
acA2500-20gm 1, 2, 3, 4, 5, 6
// Get the value of the last error code in the memory
string lasterror = camera.Parameters[PLCamera.LastError].GetValue();
// Clear the value of the last error code in the memory
camera.Parameters[PLCamera.ClearLastError].Execute();

Event Notification

Enabling Event Notification

  1. Set the EventSelector parameter to one of the following values:
    • FrameStart
    • FrameStartOvertrigger
    • FrameStartWait
    • AcquisitionStart
    • AcquisitionStartOvertrigger
    • AcquisitionStartWait
    • ExposureEnd
    • EventOverrun (if available)
    • CriticalTemperature (if available)
    • OverTemperature (if available)
    • ActionLate (if available)
  2. Set the EventNotification parameter to On.
  3. Repeat steps 1 and 2 for all types of event notifications that you want to enable.
  4. Implement event handling in your application:
    • For a C++ sample implementation, see the "Grab_CameraEvents" and "Grab_CameraEvents_Usb" code samples in the C++ Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.
    • For a C and C .NET sample implementation, see the "Events Sample" code sample in the C Programmer's Guide and Reference Documentation and the pylon C .NET Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.
  • Event messages are sent to the computer if there is sufficient bandwidth available. When the camera operates at high frame rates, event messages may be lost. There is no mechanism to monitor the number of event messages lost.
  • After the camera has sent an event message, it waits for an acknowledgement. If no acknowledgement is received within a specified time frame, the camera resends the event message. If an acknowledgement is still not received, the resend mechanism repeats until the maximum number of retries is reached. When the maximum number of retries is reached, the message is dropped. While the camera is waiting for an acknowledgement, no new event messages can be transmitted.
  • An event message is only useful when its cause still exists at the time when the event is received by the computer.

Available Events

Frame Start Event

The Frame Start event occurs whenever a Frame Start trigger has been generated by the camera (free run) or applied externally (triggered image acquisition).

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, the number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Frame Start Overtrigger Event

The Frame Start Overtrigger event occurs whenever the Frame Start trigger has been overtriggered. This happens if you apply a Frame Start trigger signal when the camera is not ready to receive the signal.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, the number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Frame Start Wait Event

The Frame Start Wait event occurs whenever the camera is ready to receive a Frame Start trigger signal.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, the number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Frame Burst Start (= Acquisition Start) Event

The Frame Burst Start event and the Acquisition Start event are identical, only their names differ. The naming depends on your camera model.

In the following, the term "Frame Burst Start event" refers to both.

The Frame Burst Start event occurs whenever a Frame Burst Start trigger has been generated by the camera (free run) or applied externally (triggered image acquisition).

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, the number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Frame Burst Start Overtrigger (= Acquisition Start Overtrigger) Event

The Frame Burst Start Overtrigger event and the Acquisition Start Overtrigger event are identical, only their names differ. The naming depends on your camera model.

In the following, the term "Frame Burst Start Overtrigger event" refers to both.

The Frame Burst Start Overtrigger event occurs whenever the Frame Burst Start trigger has been overtriggered. This happens if you apply a Frame Burst Start trigger signal when the camera is not ready to receive the signal.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, the number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Frame Burst Start Wait (= Acquisition Start Wait) Event

The Frame Burst Start Wait event and the Acquisition Start Wait event are identical, only their names differ. The naming depends on your camera model.

In the following, the term "Frame Burst Start Wait event" refers to both.

The Frame Burst Start Wait event occurs whenever the camera is ready to receive a Frame Burst Start trigger signal.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Exposure End Event

The Exposure End event occurs whenever an image has been exposed.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Frame ID: Number of the image that has been exposed.
  • Stream Channel Index: If available, number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Event Overrun Event

If available, the Event Overrun event occurs if the camera's internal event queue has overrun. This happens if events are generated at a very high frequency and there isn't enough bandwidth available to send the events.

The event overrun event is a warning that events are being dropped. The notification contains no specific information about how many or which events have been dropped.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

The names of the parameters containing the information vary by camera model.

Critical Temperature Event

If available, the Critical Temperature event occurs if the camera’s temperature state has reached a critical level.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated. The name of the timestamp parameter depends on your camera model.

Over Temperature Event

If available, the Over Temperature event occurs if the camera’s temperature state has reached the over temperature level.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated. The name of the timestamp parameter depends on your camera model.

Action Late Event

If available, the Action Late event occurs if the camera receives a scheduled action command with a timestamp in the past.

When this event occurs, the corresponding message contains the following information:

  • Timestamp: Time when the event was generated.
  • Stream Channel Index: If available, number of the image data stream used to transfer the image. On Basler cameras, this parameter is always set to 0.

Specifics

Camera Model: acA2500-20gm

Events Available:
  • Frame Start
  • Frame Start Overtrigger
  • Acquisition Start
  • Acquisition Start Overtrigger
  • Exposure End
  • Critical Temperature
  • Over Temperature
  • Action Late

Event Parameters Available:
  • FrameStartEventStreamChannelIndex
  • FrameStartEventTimestamp
  • FrameStartOvertriggerEventStreamChannelIndex
  • FrameStartOvertriggerEventTimestamp
  • AcquisitionStartEventStreamChannelIndex
  • AcquisitionStartEventTimestamp
  • AcquisitionStartOvertriggerEventStreamChannelIndex
  • AcquisitionStartOvertriggerEventTimestamp
  • ExposureEndEventFrameID
  • ExposureEndEventStreamChannelIndex
  • ExposureEndEventTimestamp
  • EventCriticalTemperatureEventTimestamp
  • EventOverTemperatureEventTimestamp
  • ActionLateEventTimestamp
  • ActionLateEventStreamChannelIndex
// Enable the Exposure End event notification
camera.Parameters[PLCamera.EventSelector].SetValue(PLCamera.EventSelector.ExposureEnd);
camera.Parameters[PLCamera.EventNotification].SetValue(PLCamera.EventNotification.On);
// Enable the Critical Temperature event notification
camera.Parameters[PLCamera.EventSelector].SetValue(PLCamera.EventSelector.CriticalTemperature);
camera.Parameters[PLCamera.EventNotification].SetValue(PLCamera.EventNotification.On);
// Now, you must implement event handling in your application.
// For a C++ sample implementation, see the "Grab_CameraEvents" and "Grab_CameraEvents_Usb"
// code samples in the C++ Programmer's Guide and Reference Documentation delivered
// with the Basler pylon Camera Software Suite.
// For a C and C .NET sample implementation, see the "Events Sample" code sample in
// the C Programmer's Guide and Reference Documentation and the pylon C .NET Programmer's
// Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.

Exposure Auto

Prerequisites

  • If the camera is configured for hardware triggering, the ExposureMode parameter must be set to Timed.
  • At least one Auto Function ROI must be assigned to the Exposure Auto auto function.
  • The Auto Function ROI assigned must overlap the Image ROI, either partially or completely.

Enabling or Disabling Exposure Auto

To enable or disable the Exposure Auto auto function, set the ExposureAuto parameter to one of the following operating modes:

  • Once: The camera adjusts the exposure time until the specified target brightness value has been reached. When this has been achieved, or after a maximum of 30 calculation cycles, the camera sets the auto function to Off. The exposure time resulting from the last calculation is applied to all following images.
  • Continuous: The camera adjusts the exposure time continuously while images are acquired. The adjustment process continues until the operating mode is set to Once or Off.
  • Off: Disables the Exposure Auto auto function. The exposure time is set to the value resulting from the last automatic or manual adjustment.

When the camera is capturing images continuously, the auto function takes effect with a short delay. The first few images may not be affected by the auto function.

Specifying Lower and Upper Limits

The auto function adjusts the ExposureTimeAbs parameter value within limits specified by you.

To change the limits, set the AutoExposureTimeAbsLowerLimit and the AutoExposureTimeAbsUpperLimit parameters to the desired values (in µs).

Example: Assume you have set the AutoExposureTimeAbsLowerLimit parameter to 1000 and the AutoExposureTimeAbsUpperLimit parameter to 5000. During the automatic adjustment process, the exposure time will never be lower than 1000 µs and never higher than 5000 µs.

If the AutoExposureTimeAbsUpperLimit parameter is set to a high value, the camera’s frame rate may decrease.

Specifying the Target Brightness Value

The auto function adjusts the exposure time until a target brightness value, i.e., an average gray value, has been reached.

To specify the target value, use the AutoTargetValue parameter. The parameter's value range depends on the camera model and the pixel format used.

  • The target value calculation does not include other image optimizations, e.g. Gamma. Depending on the image optimizations set, images output by the camera may have a significantly lower or higher average gray value than indicated by the target value.
  • The camera also uses the AutoTargetValue parameter to control the Gain Auto auto function. If you want to use Exposure Auto and Gain Auto at the same time, use the Auto Function Profile feature to specify how the effects of both are balanced.

On Basler ace GigE camera models, you can also specify a Gray Value Adjustment Damping factor. On Basler dart and pulse camera models, you can specify a Brightness Adjustment Damping factor.

When a damping factor is used, the target value is reached more slowly.

Specifics

On some camera models, you can use the Remove Parameter Limits feature to increase the target value parameter limits.

Camera Model  Minimum Target Value  Maximum Target Value

All ace U GigE camera models  50 / 800a  205 / 3280a

// Set the Exposure Auto auto function to its minimum lower limit
// and its maximum upper limit
double minLowerLimit = camera.Parameters[PLCamera.AutoExposureTimeAbsLowerLimit].GetMinimum();
double maxUpperLimit = camera.Parameters[PLCamera.AutoExposureTimeAbsUpperLimit].GetMaximum();
camera.Parameters[PLCamera.AutoExposureTimeAbsLowerLimit].SetValue(minLowerLimit);
camera.Parameters[PLCamera.AutoExposureTimeAbsUpperLimit].SetValue(maxUpperLimit);
// Set the target brightness value to 128
camera.Parameters[PLCamera.AutoTargetValue].SetValue(128);
// Select Auto Function ROI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);
// Enable Exposure Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.ExposureAuto].SetValue(PLCamera.ExposureAuto.Continuous);

Exposure Time

The Exposure Time camera feature specifies how long the image sensor is exposed to light during image acquisition.

To automatically set the exposure time, use the Exposure Auto feature.

Prerequisites

  • If the camera is configured for hardware triggering, the ExposureMode parameter must be set to Timed. Otherwise, the ExposureTimeAbs parameter is not available.
  • The Exposure Auto auto function must be set to Off. Otherwise, setting the exposure time has no effect.

Setting the Exposure Time

To set the exposure time in microseconds, use the ExposureTimeAbs parameter.

The minimum exposure time, the maximum exposure time, and the increments in which the parameter can be changed vary by camera model.

Determining the Exposure Time

To determine the current exposure time in microseconds, get the value of the ExposureTimeAbs parameter.

This can be useful, for example, if the Exposure Auto auto function is enabled and you want to retrieve the automatically adjusted exposure time.

Exposure Time Mode

Depending on your camera model, the ExposureTimeMode parameter is available. It allows you to choose between the Standard and the Ultra Short exposure time mode. Using the Ultra Short exposure time mode lowers the value range of the ExposureTimeAbs parameter. It allows you to set very short exposure times.

  • The ExposureTimeMode parameter can only be used if the prerequisites listed above are met.
  • Depending on the exposure time mode, the exposure start delay changes.
  • If the Ultra Short exposure time mode is enabled, the Sequencer feature is not available.

You can set the ExposureTimeMode parameter to one of the following values:

  • Standard: Enables the Standard exposure time mode. This is the default setting. When you enable this mode, the exposure time is set to the minimum value available in this exposure time mode.
  • UltraShort: Allows you to set an ultra short exposure time within the value range available. When you enable this mode, the exposure time is set to the maximum value available in this exposure time mode.

Specifics

On some camera models, you can use the Remove Parameter Limits feature to increase the exposure time parameter limits.

Camera Model  Minimum Exposure Time [μs]  Maximum Exposure Time [μs]  Increment [μs]  ExposureTimeMode Parameter Available

acA2500-20gm 137 1000000 1 No
// Determine the current exposure time
double d = camera.Parameters[PLCamera.ExposureTimeAbs].GetValue();
// Set the exposure time mode to Standard
// Note: Available on selected camera models only
camera.Parameters[PLCamera.ExposureTimeMode].SetValue(PLCamera.ExposureTimeMode.Standard);
// Set the exposure time to 3500 microseconds
camera.Parameters[PLCamera.ExposureTimeAbs].SetValue(3500.0);

Exposure Mode

The Exposure Mode camera feature allows you to choose a method for determining the length of exposure when the camera is configured for hardware triggering.

The resulting camera behavior also depends on the Trigger Activation setting.

To set the exposure mode:

  1. Set the TriggerSelector parameter to FrameStart.
  2. Set the TriggerMode parameter to On.
  3. Set the TriggerSource parameter to one of the available hardware trigger sources, e.g., Line1.
  4. Set the ExposureMode parameter to one of the following values:
    • Timed
    • TriggerWidth (if available)

Available Exposure Modes

Timed Exposure Mode

Timed exposure mode is available on all camera models.

In this mode, the length of exposure is determined by the value of the camera’s Exposure Time setting.

If the camera is configured for software triggering, exposure starts when the software trigger signal is received and continues until the exposure time has expired.

If the camera is configured for hardware triggering, the following applies:

  • If rising edge triggering is enabled, exposure starts when the trigger signal rises and continues until the exposure time has expired.

  • If falling edge triggering is enabled, exposure starts when the trigger signal falls and continues until the exposure time has expired.

 

Avoiding Overtriggering in Timed Exposure Mode

If the Timed exposure mode is enabled, do not attempt to trigger a new exposure start while the previous exposure is still in progress. Otherwise, the trigger signal will be ignored, and a Frame Start Overtrigger event will be generated.

This scenario is illustrated below for rising edge triggering.

Trigger Width Exposure Mode

Trigger Width exposure mode is available on some camera models.

In this mode, the length of exposure is determined by the width of the hardware trigger signal. This is useful if you intend to vary the length of exposure for each captured frame.

If the camera is configured for rising edge triggering, exposure starts when the trigger signal rises and continues until the trigger signal falls:

If the camera is configured for falling edge triggering, exposure starts when the trigger signal falls and continues until the trigger signal rises:

Exposure Time Offset

On some camera models, when using the Trigger Width exposure mode, the exposure is slightly longer than the width of the trigger signal. This is because an exposure time offset is added automatically to the time determined by the width of the trigger signal.

To achieve the desired exposure time in Trigger Width exposure mode, you must compensate for the exposure time offset. To do so:

  1. Subtract the exposure time offset from the desired exposure time.
  2. Use the resulting time as the high or low time for the trigger signal.

Example: Assume you want to achieve an exposure time of 3000 µs and the exposure time offset is 64 µs. In this case, use 3000 - 64 = 2936 µs as the high or low time for the trigger signal.
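As a minimal sketch of this compensation (using the values from the example above; the 64 µs offset is model-specific, so check the Specifics table for your camera):

```csharp
// Compensate for the exposure time offset in Trigger Width exposure mode.
// The 64 µs offset is taken from the example above; the actual offset is model-specific.
double desiredExposureUs = 3000.0;
double exposureTimeOffsetUs = 64.0;
double triggerSignalWidthUs = desiredExposureUs - exposureTimeOffsetUs; // 2936 µs
// Use triggerSignalWidthUs as the high or low time for the trigger signal.
```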

Avoiding Overtriggering in Trigger Width Exposure Mode

If the Trigger Width exposure mode is enabled, do not send trigger signals at too high a rate. Otherwise, trigger signals will be ignored, and Frame Start Overtrigger events will be generated.

You can avoid overtriggering in Trigger Width exposure mode by doing the following:

  • Monitor the camera’s Frame Trigger Wait signal and only apply a Frame Start trigger signal when the Frame Trigger Wait signal is high.
  • If the Exposure Overlap Time Max parameter is available, set it to the smallest exposure time you intend to use.

Specifics

Camera Model Available Exposure Modes Exposure Time Offset [µs]
acA2500-20gm Timed, Trigger Width Not specified
// Select and enable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the trigger source to Line 1
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);
// Enable Timed exposure mode
camera.Parameters[PLCamera.ExposureMode].SetValue(PLCamera.ExposureMode.Timed);

Exposure Overlap Time Max

The Exposure Overlap Time Max camera feature allows you to optimize overlapping image acquisition.

Using this parameter is especially useful if you want to maximize the camera's frame rate, i.e., if you want to trigger at the highest rate possible.

The parameter is only available if you operate the camera in Trigger Width exposure mode.

Prerequisites

  • The trigger mode of the camera's Frame Start trigger must be set to On.
  • The ExposureMode parameter must be set to TriggerWidth.
  • If the ExposureOverlapTimeMode parameter is available, the parameter must be set to Manual.

How It Works

You can use overlapping image acquisition to increase the camera's frame rate. With overlapping image acquisition, the exposure of a new image begins while the camera is still reading out the sensor data of the previous image.

In Trigger Width exposure mode, the camera doesn't "know" how long the image will be exposed before the trigger period is complete. Because of that, the camera can't fully optimize overlapping image acquisition.

To avoid this problem, enter a value for the ExposureOverlapTimeMaxAbs parameter that represents the shortest exposure time you intend to use (in µs). This helps the camera to optimize overlapping image acquisition.

If you have entered a value for the ExposureOverlapTimeMaxAbs parameter, make sure never to apply a trigger signal that is shorter than the given parameter value.

Setting the Exposure Overlap Time Max

To optimize the camera's frame rate in Trigger Width exposure mode, enter a value for the ExposureOverlapTimeMaxAbs parameter that represents the shortest exposure time you intend to use (in µs).

Example: Assume that you want to use the Trigger Width exposure mode to apply exposure times in a range from 3000 μs to 5500 μs. In this case, set the camera’s ExposureOverlapTimeMaxAbs parameter to 3000.

Additional Parameters

On some camera models, the ExposureOverlapTimeMode parameter is available.

If the parameter is available, you can set it to one of the following values:

  • Automatic: The value of the ExposureOverlapTimeMaxAbs parameter is set to the maximum possible value and can't be modified. This is the default setting.
  • Manual: You can configure the ExposureOverlapTimeMaxAbs parameter as desired.

If the parameter is not available, the camera always operates in the "Manual" mode.

Specifics

Camera Model ExposureOverlapTimeMode Parameter Available
acA2500-20gm No
// Set the maximum overlap time between sensor
// exposure and sensor readout to 10000 microseconds
camera.Parameters[PLCamera.ExposureOverlapTimeMaxAbs].SetValue(10000.0);

Gamma

The Gamma camera feature allows you to adjust the overall brightness of the images output by the camera by applying gamma correction.

Prerequisites

  • For best results, set the black level to 0 (zero) before you adjust gamma.
  • If the GammaEnable parameter is available, it must be set to true.

How It Works

The camera applies a gamma correction value (γ) to the brightness value of each pixel according to the following formula (red pixel value (R) of a color camera shown as an example):

Rcorrected = Rmax × (R / Rmax)^γ

The maximum pixel value (Rmax) equals 255 for 8-bit pixel formats or 1023 for 10-bit pixel formats.

Enabling Gamma Correction

To adjust the gamma correction value, use the Gamma parameter. The parameter's value range is 0 to ≈4.

  • Gamma = 1: The overall brightness remains unchanged.
  • Gamma < 1: The overall brightness increases.
  • Gamma > 1: The overall brightness decreases.

In all cases, black pixels (brightness = 0) and white pixels (brightness = maximum) will not be adjusted.

If you enable gamma correction and the pixel format is set to a 12-bit pixel format, some image information will be lost. Pixel data output will still be 12-bit, but the pixel values will be interpolated during the gamma correction process. Basler does not recommend using the Gamma feature with 12-bit pixel formats.

Additional Parameters

Depending on your camera model, the following additional parameters are available:

  • GammaEnable: Enables or disables gamma correction.
  • GammaSelector: Allows you to select one of the following gamma correction modes:
    • User: The gamma correction value can be set as desired. (Default.)
    • sRGB: The camera automatically sets a gamma correction value of approximately 0.4. This value is optimized for image display on sRGB monitors.
  • BslColorSpaceMode: Allows you to select one of the following gamma correction modes:
    • sRGB: The image brightness is optimized for display on an sRGB monitor. An additional gamma correction value of approximately 0.4 is applied. (Default.) 

      The sRGB gamma correction value is applied separately and will not be included in the Gamma parameter value. Example: You have set the color space mode to sRGB and the Gamma parameter value to 1.2. First, an automatic correction value of approximately 0.4 is applied to the pixel values. After that, a gamma correction value of 1.2 is applied to the resulting pixel values.

    • RGB: No additional gamma correction value is applied.

Specifics

Camera Model Additional Parameters
All ace GigE camera models GammaEnable, GammaSelector

// Enable the Gamma feature
camera.Parameters[PLCamera.GammaEnable].SetValue(true);
// Set the gamma type to User
camera.Parameters[PLCamera.GammaSelector].SetValue(PLCamera.GammaSelector.User);
// Set the Gamma value to 1.2
camera.Parameters[PLCamera.Gamma].SetValue(1.2);

Gain

The Gain camera feature allows you to increase the brightness of the images output by the camera. Increasing the gain increases all pixel values of the image. To adjust the gain value automatically, use the Gain Auto feature.

Prerequisites

  • The Gain Auto auto function must be set to Off. Otherwise, setting the gain has no effect.

Configuring Gain Settings

  1. If the camera's gain control is user-settable, set the GainSelector to one of the following values:
    • AnalogAll: Selects the analog gain control.
    • DigitalAll: Selects the digital gain control.
    Otherwise, you don't need to set the GainSelector parameter.
  2. Set the GainRaw parameter to the required value.

    The minimum and maximum parameter values vary depending on the camera model, the pixel format chosen, and the Binning settings.

"Raw" and Absolute Gain Values

On some camera models, the gain must be entered as a "raw" value on an integer scale. The camera needs the raw value for its internal processing mechanism. The raw value, however, isn't the same as the actual gain value, which is expressed in decibels (dB).

In the camera-specific Gain Properties table, you can find a formula to calculate the absolute gain (in dB) from the raw gain value.

Analog and Digital Gain

Analog gain is applied before the signal from the camera sensor is converted into digital values. Digital gain is applied after the conversion, i.e., it is basically a multiplication of the digitized values.

Depending on your camera model, the mechanisms to control analog and digital gain can vary:

  • For some cameras, gain control is analog up to and including a certain threshold. Above the threshold, gain control is digital.
  • For some cameras, gain control is entirely digital.
  • For some cameras, you can use the GainSelector parameter to manually switch between analog and digital gain.

Specifics

Gain Properties

Camera Model User-Settable Gain Control? Gain Control Mechanism Threshold Gain Must be Entered as ... Formula to Calculate Gain from Raw Gain Values
acA2500-20gm No Digital gain only - Raw Gain = 20 × log10(GainRaw / 136)

Gain Values

On some camera models, you can use the Remove Parameter Limits feature to increase the gain parameter limits.

Camera Model Minimum Gain Setting Minimum Gain Setting with Vertical Binning Enabled Maximum Gain Setting (8-bit Pixel Formats) Maximum Gain Setting (10-bit Pixel Formats) Maximum Gain Setting (12-bit Pixel Formats)
acA2500-20gm 136 136 542 542 -
// Set the "raw" gain value to 400
// If you want to know the resulting gain in dB, use the formula given in this topic
camera.Parameters[PLCamera.GainRaw].SetValue(400);

Gain Auto

The Gain Auto camera feature automatically adjusts the gain within specified limits until a target brightness value has been reached.

The pixel data for the auto function can come from one or multiple Auto Function ROIs.

If you want to use Gain Auto and Exposure Auto at the same time, use the Auto Function Profile feature to specify how the effects of both are balanced.

To adjust the gain manually, use the Gain feature.

Prerequisites

  • At least one Auto Function ROI must be assigned to the Gain Auto auto function.
  • The Auto Function ROI assigned must overlap the Image ROI, either partially or completely.

Enabling or Disabling Gain Auto

To enable or disable the Gain Auto auto function, set the GainAuto parameter to one of the following operating modes:

  • Once: The camera adjusts the gain until the specified target brightness value has been reached. When this has been achieved, or after a maximum of 30 calculation cycles, the camera sets the auto function to Off and applies the gain resulting from the last calculation to all following images.
  • Continuous: The camera adjusts the gain continuously while images are acquired. The adjustment process continues until the operating mode is set to Once or Off.
  • Off: Disables the Gain Auto auto function. The gain is set to the value resulting from the last automatic or manual adjustment.

When the camera is capturing images continuously, the auto function takes effect with a short delay. The first few images may not be affected by the auto function.

Specifying Lower and Upper Limits

The auto function adjusts the GainRaw parameter value within limits specified by you.

To change the limits, set the AutoGainRawLowerLimit and the AutoGainRawUpperLimit parameters to the desired values.

Example: Assume you have set the AutoGainRawLowerLimit parameter to 2 and the AutoGainRawUpperLimit parameter to 6. During the automatic adjustment process, the gain will never be lower than 2 and never higher than 6.

Specifying the Target Brightness Value

The auto function adjusts the gain until a target brightness value, i.e., an average gray value, has been reached.

To specify the target value, use the AutoTargetValue parameter. The parameter's value range depends on the camera model and the pixel format used.

  • The target value calculation does not include other image optimizations, e.g. Gamma. Depending on the image optimizations set, images output by the camera may have a significantly lower or higher average gray value than indicated by the target value.
  • The camera also uses the AutoTargetValue parameter to control the Exposure Auto auto function. If you want to use Gain Auto and Exposure Auto at the same time, use the Auto Function Profile feature to specify how the effects of both are balanced.

On Basler ace GigE camera models, you can also specify a Gray Value Adjustment Damping factor. On Basler dart and pulse camera models, you can specify a Brightness Adjustment Damping factor.

When a damping factor is used, the target value is reached more slowly.

Specifics

On some camera models, you can use the Remove Parameter Limitsfeature to increase the target value parameter limits.

Camera Model Minimum Target Value Maximum Target Value
All ace U GigE camera models 50 / 800a 205 / 3280a

// Set the Gain Auto auto function to its minimum lower limit
// and its maximum upper limit
double minLowerLimit = camera.Parameters[PLCamera.AutoGainRawLowerLimit].GetMinimum();
double maxUpperLimit = camera.Parameters[PLCamera.AutoGainRawUpperLimit].GetMaximum();
camera.Parameters[PLCamera.AutoGainRawLowerLimit].SetValue(minLowerLimit);
camera.Parameters[PLCamera.AutoGainRawUpperLimit].SetValue(maxUpperLimit);
// Specify the target value
camera.Parameters[PLCamera.AutoTargetValue].SetValue(150);
// Select Auto Function ROI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);
// Enable Gain Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);

Gray Value Adjustment Damping

The Gray Value Adjustment Damping camera feature controls the speed with which pixel gray values are changed when Exposure Auto, Gain Auto, or both are enabled.

This feature is similar to the Brightness Adjustment Damping feature, which is only available on Basler dart and pulse camera models.

Prerequisites

The Exposure Auto or Gain Auto auto function or both must be set to Once or Continuous.

How It Works

The lower the gray value adjustment damping factor, the slower the target brightness value is reached. This can be useful, for example, to avoid the auto functions being disrupted by objects moving into and out of the camera's field of view.

The Brightness Adjustment Damping feature, which is only available on Basler dart and pulse camera models, works vice versa: The lower the brightness adjustment damping factor, the faster the target brightness value is reached.

Specifying a Damping Factor

To specify a damping factor, adjust the GrayValueAdjustmentDampingAbs parameter value.

You can set the parameter in a range from 0.0 to 0.78125. Using higher parameter values means that the target value is reached sooner.

By default, the factor is set to 0.6836. With this setting, the damping control is as stable and fast as possible.

// Enable Gain Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);
// Set gray value adjustment damping to 0.5859
camera.Parameters[PLCamera.GrayValueAdjustmentDampingAbs].SetValue(0.5859);

Image ROI

The Image ROI camera feature allows you to specify the part of the sensor array that you want to use for image acquisition.

ROI is short for region of interest (formerly AOI = area of interest).

If an Image ROI has been specified, the camera will only transmit pixel data from within that region. This can increase the camera's maximum frame rate significantly.

The Image ROI settings are independent from the Auto Function ROI settings.

Prerequisites

  • If you want to configure the Width and Height parameters, the camera must be idle, i.e., not capturing images.
  • If you want to configure the OffsetX parameter, the CenterX parameter must be set to false.
  • If you want to configure the OffsetY parameter, the CenterY parameter must be set to false.

Changing Position and Size of an Image ROI

With the factory settings enabled, the Image ROI is set to the camera's default resolution. However, you can change its position and size as required.

To change the position and size of the Image ROI:

  1. Use the following parameters to specify the position of the Image ROI:
    • OffsetX
    • OffsetY
  2. Use the following parameters to specify the size of the Image ROI:
    • Width
    • Height

The origin of the Image ROI is in the top left corner of the sensor array (column 0, row 0).

Example: Assume that you have specified the following settings:

  • OffsetX = 2
  • OffsetY = 6
  • Width = 16
  • Height = 10

This creates the following Image ROI:

  • Decreasing the size (especially the height) of the Image ROI can increase the camera’s maximum frame rate significantly.
  • If the Binning feature is enabled, the settings for the Image ROI refer to the binned lines and columns and not to the physical lines in the sensor.

Guidelines

When you are specifying an Image ROI, follow these guidelines:

Guideline Example
OffsetX + Width ≤ SensorWidth Camera with a 1920 x 1080 pixel sensor: OffsetX + Width ≤ 1920
OffsetY + Height ≤ SensorHeight Camera with a 1920 x 1080 pixel sensor: OffsetY + Height ≤ 1080

Specifics

Image ROI Sizes

Camera Model Minimum Width Width Increment Minimum Height Height Increment
acA2500-20gm 32 32 1 1

Image ROI Offsets

Camera Model Minimum Offset X Offset X Increment Minimum Offset Y Offset Y Increment
acA2500-20gm 0 1 0 1
// Set the width to the maximum value
Int64 maxWidth = camera.Parameters[PLCamera.Width].GetMaximum();
camera.Parameters[PLCamera.Width].SetValue(maxWidth);
// Set the height to 500
camera.Parameters[PLCamera.Height].SetValue(500);
// Set the offset to 0,0
camera.Parameters[PLCamera.OffsetX].SetValue(0);
camera.Parameters[PLCamera.OffsetY].SetValue(0);

Line Debouncer

The Line Debouncer camera feature allows you to filter out invalid hardware input signals.

Only valid signals are allowed to pass through to the camera and become effective.

Prerequisites

The camera must be configured for hardware triggering.

How It Works

The line debouncer filters out unwanted short signals (contact bounce) from the rising and falling edges of incoming hardware trigger signals. To this end, the line debouncer evaluates all changes and durations of the logical states of hardware signals.

The maximum duration of this evaluation period (the "line debouncer time") is defined by the LineDebouncerTimeAbs parameter. The line debouncer acts like a clock that measures the durations of the signals to identify valid signals.

The clock starts counting whenever a hardware signal changes its logical state (high to low or vice versa). If the duration of the new logical state is shorter than the line debouncer time specified, the new logical state is considered invalid and has no effect. If the duration of the new logical state is as long as the line debouncer time or longer, the new logical state is considered valid and is allowed to become effective in the camera.

Specifying a line debouncer time introduces a delay between a valid trigger signal arriving at the camera and the moment the related change of logical state is passed on to the camera. The duration of the delay is at least equal to the value of the LineDebouncerTimeAbs parameter. This is because the camera waits for the time specified as the line debouncer time to determine whether the signal is valid. Similarly, the line debouncer delays the end of a valid trigger signal.

The figure below illustrates how the line debouncer filters out invalid signals from the rising and falling edge of a hardware trigger signal. Line debouncer times that actually allow a change of logical state in the camera are labeled "OK". Also illustrated are the delays of logical states inside the camera relative to the hardware trigger signal.

Enabling the Line Debouncer

  1. Set the LineSelector parameter to the desired input line.
  2. Enter a value for the LineDebouncerTimeAbs parameter.

Choosing the Debouncer Value

Choosing a LineDebouncerTimeAbs value that is too low results in invalid signals and signal states being accepted. Choosing a value that is too high results in valid signals and signal states being rejected. Basler recommends choosing a line debouncer time slightly longer than the longest expected duration of an invalid signal.

There is a small risk of rejecting short valid signals, but in most scenarios this approach delivers good results. Monitor your application and adjust the value if you find that too many valid signals are being rejected.

// Select the desired input line
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Set the parameter value to 10 microseconds
camera.Parameters[PLCamera.LineDebouncerTimeAbs].SetValue(10.0);

Line Inverter

The Line Inverter camera feature allows you to invert the electrical signal level of an I/O line.

All high (1) signals are converted to low (0) signals and vice versa.

Enabling the Line Inverter

Enable the line inverter only when the I/O lines are not in use. Otherwise, the camera may show unpredictable behavior.

  1. Set the LineSelector parameter to the desired I/O line.
  2. Set the LineInverter parameter to true to invert the electrical signal level of the I/O line selected or to false to disable inversion.
// Select Line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Enable the line inverter for the I/O line selected
camera.Parameters[PLCamera.LineInverter].SetValue(true);

Line Logic

The Line Logic camera feature allows you to determine the logic of an I/O line.

The logic of an I/O line can either be positive or negative.

Determining the Line Logic

To determine the logic of an I/O line:

  1. Set the LineSelector parameter to the desired I/O line.
  2. Get the value of the LineLogic parameter.

    The parameter is read-only.

Line Logic Overview

Positive Line Logic

If the line logic is positive, the relation between the electrical status of an I/O line and the LineStatus parameter is as follows:

Electrical Status LineStatus Parameter Value
Voltage level high True
Voltage level low False

Negative Line Logic

If the line logic is negative, the relation between the electrical status of an I/O line and the LineStatus parameter is as follows:

Electrical Status LineStatus Parameter Value
Voltage level high False
Voltage level low True
// Select Line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Get the logic of the line selected (read-only)
string lineLogic = camera.Parameters[PLCamera.LineLogic].GetValue();

Line Minimum Output Pulse Width

The Line Minimum Output Pulse Width camera feature allows you to increase the signal width ("pulse width") of an output signal in order to achieve a minimum signal width.

Increasing the camera output signal width can be necessary to suit certain receivers that may require a certain minimum signal width to be able to detect the signals.

Specifying a Line Minimum Output Pulse Width

  1. Set the LineSelector parameter to the desired camera output line.
  2. Enter a value for the LineMinimumOutputPulseWidth parameter.

How It Works

To ensure reliable detection of camera output signals, the Line Minimum Output Pulse Width feature allows you to increase the output signal width to a specified minimum. The minimum width is specified in microseconds, up to a maximum value of 100 µs.

  • If the original signal width is narrower than the minimum signal width specified, the signal width is increased to achieve the minimum width.
  • If the original signal width is equal to or wider than the set minimum signal width, the feature has no effect.
// Select output line Line 2
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Set the parameter value to 10.0 microseconds
camera.Parameters[PLCamera.LineMinimumOutputPulseWidth].SetValue(10.0);

Line Mode

The Line Mode camera feature allows you to configure whether an I/O line is used as input or output.

You can configure the line mode of any general purpose I/O line (GPIO line). For opto-coupled I/O lines, you can only determine the current line mode.

Configuring the Line Mode

Configure the line mode only when the I/O lines are not in use. Otherwise, the camera may show unpredictable behavior.

  1. Set the LineSelector parameter to the desired GPIO line.
  2. Set the LineMode parameter to one of the following values:
    • Input: Configures the GPIO line as an input line.
    • Output: Configures the GPIO line as an output line.

Determining the Line Mode

  1. Set the LineSelector parameter to the desired I/O line.
  2. Get the value of the LineMode parameter to determine the current line mode of the I/O line.
// Select GPIO line 3
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line3);
// Set the line mode to Input
camera.Parameters[PLCamera.LineMode].SetValue(PLCamera.LineMode.Input);
// Get the current line mode
string lineMode = camera.Parameters[PLCamera.LineMode].GetValue();

Line Selector

The Line Selector camera feature allows you to select the I/O line that you want to configure.

Selecting a Line

To select a line, set the LineSelector parameter to the desired I/O line.

Depending on the camera model, the total number of I/O lines, the format of the lines (opto-coupled or GPIO), and the pin assignment may vary. To find out what your camera model offers, check the physical interface section in the topic about your camera model. Possible tasks depend on whether the I/O line serves as input or output.

Once you have selected a line, you can do the following:

Task Feature
Configuring the debouncer for an input line Line Debouncer
Selecting the source signal for an output line Line Source
Setting the minimum pulse width for an output line Line Minimum Output Pulse Width
Setting the line status of a user-settable output line User Output Value
Setting the line mode of a GPIO line Line Mode
Enabling the invert function Line Inverter
Checking the status of a single I/O line Line Status
// Select input line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);

Line Source

The Line Source camera feature allows you to configure which signal to output on the I/O line currently selected.

This allows you to monitor the status of the camera or to control external devices. For example, you can monitor if the camera is currently exposing or you can control external flash lighting.

For each camera output line, you can set exactly one signal.

The camera sends all output signals with a short propagation delay. The delay is usually in the microseconds range.

Setting the Line Source

To set the line source:

  1. Set the LineSelector parameter to an opto-coupled camera output line or to a GPIO line configured as output.
  2. Set the LineSource parameter to one of the following values:
    • FrameTriggerWait (if available)
    • AcquisitionTriggerWait (if available)
    • TimerActive (if available)
    • ExposureActive (if available)
    • FlashWindow (if available)
    • UserOutput, UserOutput1, UserOutput2, or UserOutput3
    • SyncUserOutput (if available)
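
As a sketch in the pylon .NET API style used elsewhere in this topic (Line 2 and the ExposureActive source are example choices; availability depends on your camera model):

```csharp
// Select output line Line 2
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Output the Exposure Active signal on the line selected (if available)
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.ExposureActive);
```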

Available Line Source Signals

Depending on your camera model, the following line source signals are available:

Trigger Wait

You can use the camera's "Trigger Wait" signals to optimize triggered image acquisitionand to avoid overtriggering.

  • Basler strongly recommends using the Trigger Wait signals only when the camera is configured for hardware triggering. For software triggering, use the Acquisition Status feature.
  • The Frame Burst Trigger Wait signal and the Acquisition Trigger Wait signal are identical, only the names differ. The naming depends on your camera model.

Trigger Wait signals go high when the camera is ready to receive trigger signals of the corresponding trigger type. When you apply the corresponding trigger signal, the Trigger Wait signal goes low. It stays low until the camera is ready to receive the next corresponding trigger signal.

For example, the Frame Trigger Wait signal goes high when the camera is ready to receive Frame Start trigger signals. When you apply a frame trigger signal, the signal goes low. It stays low until the camera is ready to receive the next Frame Start trigger signal:

If you operate the camera with overlapping image acquisition and the Exposure Overlap Time Max feature is available on your camera model, you can use that feature to optimize the Frame Trigger Wait signal.

Timer Active

You can use the Timer Active (or "Timer 1 Active") signal to monitor the camera's Timer feature. The signal goes high on specific camera events, e.g., on exposure start. The signal goes low after the duration specified. Optionally, you can delay the rise of the signal.

Exposure Active

If available, you can use the Exposure Active signal to monitor if the camera is currently exposing. The signal goes high when exposure starts. The signal goes low when exposure ends. On cameras configured for Rolling shutter mode, the signal goes low when exposure for the last row has ended.

The Exposure Active signal can be used to trigger a flash.

The signal is also useful in situations where either the camera or the target object is moving. For example, assume that the camera is mounted on an arm mechanism that moves the camera to different sections of a product assembly. Typically, you don't want the camera to move during exposure. In this case, you can monitor the Exposure Active signal to know when exposure is taking place. This allows you to avoid moving the camera during that time.

Flash Window

If available, you can use the Flash Window signal to determine when to use flash lighting. The signal goes high when you can start the flash lighting. The signal goes low when you should stop the flash lighting.

The signal indicates the period of time during a frame acquisition when all of the rows in the sensor are open for exposure.

Flash Window in Rolling Shutter Mode

If the camera is configured for Rolling shutter mode, Basler recommends the use of flash lighting, especially when you are capturing images of fast-moving objects. Otherwise, images can be distorted due to the temporal shift between the different exposure starts of the individual rows.

The following diagram illustrates the timing of the Flash Window signal when the camera is configured for Rolling shutter mode:

As shown above, in Rolling shutter mode, the Flash Window signal covers the period of time between the start of exposure of the last row (A) and the end of exposure of the first row (B).

In Rolling shutter mode, avoid extremely short exposure times or extremely large Image ROIs. Otherwise, the exposure time for the first row may end before exposure of the last row starts, i.e., (B) occurs before (A). In that case, the Flash Window signal would always be low:

Flash Window in Global Reset Release Shutter Mode

If the camera is configured for Global Reset Release shutter mode, you must use flash lighting. Otherwise, the brightness in each acquired image may vary significantly from top to bottom due to the differences in the exposure times of the rows. Also, when you are capturing images of fast-moving objects, images may be distorted due to the temporal shift between the different exposure ends of the individual rows.

The following diagram illustrates the timing of the Flash Window signal when the camera is configured for Global Reset Release shutter mode:

As shown above, in Global Reset Release shutter mode, the Flash Window signal spans the exposure time of the first row.

Global Shutter Mode

On cameras configured for Global shutter mode, the Flash Window signal is either not available or equivalent to the Exposure Active signal.

User Output

If an output line is configured to supply a User Output signal, you can set the status of the line by software. For more information, see the User Output Value and the User Output Value All features.

This can be useful to control external events or devices, e.g., a light source.

How to configure the output lines depends on how many User Output line sources (e.g., "User Output 1", "User Output 2") are available on your camera model.

Configuration: One User Output Line Source Available

If only one User Output line source is available ("User Output"):

  1. Set the LineSelector parameter to the desired output line.
  2. Set the LineSource parameter to UserOutput.

Now, you can use the User Output Value or the User Output Value All feature to set the status of the line by software.

Configuration: Multiple User Output Line Sources Available

If multiple User Output line sources are available (e.g., "User Output 1" and "User Output 2"):

  1. Set the LineSelector parameter to the desired output line, e.g., Line 2.
  2. Set the LineSource parameter according to the user output signal assignment. 

    Example: Assume that you selected Line 2 in step 1 and that the signal assignment is "User Output 1 → Line 2". In this case, you must set the LineSource parameter to UserOutput1.

Now, you can use the User Output Value or the User Output Value All feature to set the status of the line by software.

Sync User Output

If available, you can use the Sync User Output signal to manually set the status of the line using the Sequencer feature.

The Sync User Output signal is similar to the User Output signal. The only difference is that Sync User Output signals can be controlled by the Sequencer feature, while the User Output signals can't.

The parameters related to the Sync User Output signals are also similar to the User Output parameters:

  • The SyncUserOutputValue parameter is the equivalent of the UserOutputValue parameter.
  • The SyncUserOutputValueAll parameter is the equivalent of the UserOutputValueAll parameter.
  • The SyncUserOutputSelector parameter is the equivalent of the UserOutputSelector parameter.

Specifics

Camera Model: acA2500-20gm

Available Line Sources:
  • Exposure Active
  • Frame Trigger Wait
  • Acquisition Trigger Wait
  • Timer 1 Active
  • User Output 1
  • User Output 2

User Output Signal Assignment:
  • User Output 1 → Line 2
  • User Output 2 → Line 3
// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Select the Flash Window signal as the source signal for Line 2
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.FlashWindow);

Line Status

The Line Status camera feature allows you to determine the status of an I/O line (high or low).

To determine the status of all I/O lines in a single operation, use the Line Status All feature.

Determining the Status of an I/O Line

To determine the status of an I/O line:

  1. Set the LineSelector parameter to the desired I/O line.
  2. Get the value of the LineStatus parameter. 

    The parameter is read-only.

A value of false (0) means that the line's status was low at the time of polling. A value of true (1) means the line's status was high at the time of polling.

If the Line Inverter feature is enabled, the camera inverts the LineStatus parameter value. A true parameter value changes to false, and vice versa.

Line Status and I/O Status

GPIO Line Configured as Input

If your camera has a GPIO line and that line is configured as input, the relation between its input status and the LineStatus parameter is as follows:

Input Status LineStatus Parameter Value
Input open (not connected) True
Voltage level low False
Voltage level high True

This means that the line logic is positive.

GPIO Line Configured as Output

If your camera has a GPIO line and that line is configured as output, the relation between its output status and the LineStatus parameter depends on your camera model.

Opto-Coupled Input Line

If your camera has an opto-coupled input line, the relation between its input status and the LineStatus parameter is as follows:

Input Status LineStatus Parameter Value
Input open (not connected) False
Voltage level low False
Voltage level high True

This means that the line logic is positive.

Opto-Coupled Output Line

If your camera has an opto-coupled output line, the relation between its output status and the LineStatus parameter is as follows:

Output Status                                                        LineStatus Parameter Value  Electrical Status
0 (e.g., User Output Value set to false or Flash Window signal low)  True                        Voltage level high (a)
1 (e.g., User Output Value set to true or Flash Window signal high)  False                       Voltage level low

(a) An external pull-up resistor must be installed. Otherwise, the voltage level will be undefined.

This means that the line logic is negative.

Specifics

For information about the line status on GPIO lines configured for input and on opto-coupled I/O lines, see the tables above.

For information about the line status on GPIO lines configured for output, see the following table:

  GPIO Lines Configured for Output
Camera Model Output Status LineStatus Parameter Value Electrical Status

All ace GigE camera models

0 (e.g., User Output Value set to false or Flash Window signal low)  True  Voltage level high (a)
1 (e.g., User Output Value set to true or Flash Window signal high)  False  Voltage level low

(a) An external pull-up resistor must be installed. Otherwise, the voltage level will be undefined.
// Select a line
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Get the status of the line
bool status = camera.Parameters[PLCamera.LineStatus].GetValue();

Line Status All

The Line Status All camera feature allows you to determine the status of all I/O lines in a single operation.

To determine the status of an individual I/O line, use the Line Status feature.

Determining the Status of All I/O Lines

To determine the current status of all I/O lines, read the LineStatusAll parameter. The parameter is reported as a 64-bit value.

Certain bits in the value are associated with the I/O lines. Each bit indicates the status of its associated line:

  • If a bit is 0, the status of the associated line is currently low.
  • If a bit is 1, the status of the associated line is currently high.

Which bit is associated with which line depends on your camera model.

If the Line Inverter feature is enabled, the camera inverts the LineStatusAll parameter value. All 0 bits change to 1, and vice versa.

Line Status and I/O Status

→  See the Line Status feature documentation.

Specifics

Camera Model: acA2500-20gm

Bit-to-Line Association:
  • Bit 0 indicates Line1 status
  • Bit 1 indicates Line2 status
  • Bit 2 indicates Line3 status

Example: All lines high = 0b111

// Get the line status of all I/O lines. Because the GenICam interface does not
// support 32-bit words, the line status is reported as a 64-bit value.
Int64 lineStatus = camera.Parameters[PLCamera.LineStatusAll].GetValue();
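The bits of the returned value can be decoded with simple bit operations. As a minimal sketch (lineIsHigh is a hypothetical helper, using the bit-to-line association of the acA2500-20gm listed above):

```cpp
#include <cstdint>

// Hypothetical helper: extract the status of a single I/O line from a
// LineStatusAll value. 'bit' is the bit associated with the line
// (acA2500-20gm: bit 0 = Line1, bit 1 = Line2, bit 2 = Line3).
bool lineIsHigh(int64_t lineStatusAll, int bit)
{
    return ((lineStatusAll >> bit) & 1) != 0;
}
```

For example, with a LineStatusAll value of 0b101, Line1 and Line3 are high and Line2 is low.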

LUT

The LUT camera feature allows you to replace the pixel values in your images by values defined by you.

This is done by creating a user-defined lookup table (LUT).

You can also use the LUT Value All feature to replace all pixel values in a single operation.

How It Works

LUT is short for "lookup table", which is basically an indexed list of numbers. For Basler cameras, you can create a "luminance lookup table" to change the pixel values, i.e., the luminance or gray values, in your images.

In the lookup table you can define replacement values for individual pixel values. For example, you can replace a gray value of 4 095 (= maximum gray value for 12-bit pixel formats) by a gray value of 0 (= minimum gray value). This changes all completely white pixels in your images to completely black pixels.

Setting up a LUT can be useful, e.g., if you want to optimize the luminance of your images. By defining the replacement values in advance and storing them in the camera's LUT you avoid time-consuming calculations by your application. Instead, the camera can simply look up the desired new value in the LUT based on the pixel’s initial value.

Creating the LUT

  1. Set the LUTIndex parameter to the pixel value that you want to replace with a new value.
  2. Set the LUTValue parameter to the new pixel value.
  3. Repeat steps 1 and 2 for all pixel values that you want to change.
  4. Set the LUTEnable parameter to true.

Basler recommends using a programming loop (e.g., a for-loop) to iterate through the values. See the sample code below.

If you want to change all pixel values, Basler recommends using the LUT Value All feature for faster execution.

Limitations

On all Basler cameras, a user-defined LUT can store up to 512 entries. This size is not sufficient to include all possible pixel values (e.g., 4 096 entries for a 12-bit pixel format, 1 024 entries for a 10-bit pixel format).

Therefore, the following limitations apply:

  • On cameras with a maximum pixel bit depth of 12 bit, you can only enter values for the LUTIndex parameter that are multiples of eight (0, 8, 16, 24, ..., 4088). This means that only pixel values of 0, 8, 16, 24, and so on, can be replaced.
  • On cameras with a maximum pixel bit depth of 10 bit, you can only enter values for the LUTIndex parameter that are multiples of two (0, 2, 4, 6, ..., 1022). This means that only pixel values of 0, 2, 4, 6, and so on, can be replaced.

To determine the remaining pixel values, the camera performs a straight-line interpolation.

Example: Assume that the camera uses a 12-bit pixel format. Also assume that you have created a LUT that converts a gray value of 24 to a gray value of 20 and a gray value of 32 to a value of 30. In this case, the camera determines the pixel values between 24 and 32 as follows:

Original Pixel Value  Value Stored in LUT  Interpolated Value  New Pixel Value (Rounded)
24                    20                   20                  20
25                    -                    21.25               21
26                    -                    22.5                22
27                    -                    23.75               23
28                    -                    25                  25
29                    -                    26.25               26
30                    -                    27.5                27
31                    -                    28.75               28
32                    30                   30                  30

  • Pixel values above 4088 are not interpolated. Instead, all pixel values between 4 088 and 4 095 are replaced by the pixel value entered at LUT index position 4 088.
  • If your camera supports 12-bit pixel formats, but currently uses an 8-bit pixel format, you will still be able to enter LUT indexes and values between 0 and 4088. The camera uses these values to do a 12-bit to 12-bit conversion. Then, it drops the 4 least significant bits of the converted values and transmits the 8 most significant bits.
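The interpolation described above can be sketched in a few lines. The function below is an illustration, not camera firmware: interpolate and its parameters are hypothetical names, and plain integer arithmetic (which truncates) is assumed because it reproduces the values in the example table:

```cpp
// Straight-line interpolation between two neighboring LUT entries on a
// 12-bit camera, where entries exist at multiples of 8.
// idxLow is the largest multiple of 8 <= value; lutLow and lutHigh are
// the replacement values stored at idxLow and idxLow + 8.
int interpolate(int value, int idxLow, int lutLow, int lutHigh)
{
    // Integer division truncates, matching the example table
    // (e.g., an interpolated value of 23.75 becomes 23).
    return lutLow + (value - idxLow) * (lutHigh - lutLow) / 8;
}
```

With the values from the example (a LUT entry of 20 at index 24 and 30 at index 32), interpolate(27, 24, 20, 30) yields 23, as in the table.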

Additional Parameters

The LUTSelector parameter allows you to select a lookup table.

Because there is only one user-defined lookup table available on Basler cameras, the parameter currently serves no function.

// Write a lookup table to the device.
// The following lookup table causes an inversion of the pixel values
// (bright -> dark, dark -> bright)
// Only applies to cameras with a maximum pixel bit depth of 12 bit
for (int i=0; i<4096; i+=8)
{
   camera.LUTIndex.SetValue(i);
   camera.LUTValue.SetValue(4095-i);
}
// Enable the LUT
camera.LUTEnable.SetValue(true);

LUT Value All

The LUT Value All camera feature allows you to replace all pixel values in your images by values defined by you.

This is done by replacing the entire user-defined lookup table (LUT).

To replace individual entries in the lookup table, use the LUT feature.

How It Works

LUT is short for "lookup table", which is basically an indexed list of numbers. For more information, see the LUT feature description.

While the LUT feature allows you to change individual entries in the lookup table, the LUT Value All feature allows you to change all entries in the lookup table in a single operation.

In many cases, this is faster than repeatedly changing individual entries in the LUT.

To change all entries in the lookup table, use the LUTValueAll parameter. The parameter structure depends on the maximum pixel bit depth of your camera.

12-bit Camera Models

On cameras with a maximum pixel bit depth of 12 bit, the LUTValueAll parameter is a register that consists of 4096 x 4 bytes. Each 4-byte word represents a LUTValue parameter value.

The LUTValue parameter values are sorted by the LUTIndex number in ascending order (0 through 4095).

Example:

  • As shown above, only every eighth 4-byte word (0, 8, 16, 24, ..., 4088) is actually used by the camera. The other 4-byte words are ignored. This is because the camera's internal LUT can only process 512 entries.
  • The endianness of the 4-byte words (LUT values) depends on your camera model.

10-bit Camera Models

On cameras with a maximum pixel bit depth of 10 bit, the LUTValueAll parameter is a register that consists of 1024 x 4 bytes. Each 4-byte word represents a LUTValue parameter value.

The LUTValue parameter values are sorted by the LUTIndex number in ascending order (0 through 1023).

Example:

  • As shown above, only every second 4-byte word (0, 2, 4, 6, ..., 1022) is actually used by the camera. The other 4-byte words are ignored. This is because the camera's internal LUT can only process 512 entries.  
  • The endianness of the 4-byte words (LUT values) depends on your camera model.

Setting or Getting All LUT Values

To set all entries in the lookup table:

  1. Set the LUTValueAll parameter to the desired value.

    Make sure to apply the correct endianness of the 4-byte words (LUT values).

  2. Set the LUTEnable parameter to true.

To get all entries in the lookup table:

  1. Get the value of the LUTValueAll parameter. 

    Observe the endianness of the 4-byte words (LUT values).

The LUTValueAll parameter is not available in the pylon Viewer application. You can only set or get the parameter via the pylon API.

Specifics

Camera Model                Endianness of the 4-Byte Words (LUT Values)
All ace GigE camera models  Big-endian
// Write a lookup table to the device
// The following lookup table inverts the pixel values
// (bright -> dark, dark -> bright)
// Only applies to cameras with a maximum pixel bit depth of 12 bit
// Note: This is a simplified code sample.
// You should always check the camera interface and 
// the endianness of your system before using LUTValueAll.
// For more information, see the 'LUTValueAll' code sample
// in the C++ Programmer's Guide and Reference Documentation
// delivered with the Basler pylon Camera Software Suite.
uint32_t lutValues[4096] = {0}; // zero-initialize; entries at indexes that aren't multiples of 8 are ignored by the camera
for (int i=0; i<4096; i+=8)
{
   lutValues[i] = 4095-i;
}
camera.LUTValueAll.SetValue(lutValues);
// Enable the LUT
camera.LUTEnable.SetValue(true);

PGI Feature Set

The PGI feature set allows you to optimize the quality of your images.

The main purpose of the PGI feature set is to optimize images to meet the needs of human vision. It combines up to four image optimization processes.

How It Works

Depending on your camera model, a selection of the following image optimizations will be performed:

Noise Reduction

The noise reduction (also called "denoising") reduces random variations in brightness or color information in your images.

Sharpness Enhancement

The sharpness enhancement increases the sharpness of the images. The higher the sharpness, the more distinct the contours of the image objects will be. This is especially useful in applications where cameras must correctly identify numbers or letters.

5×5 Demosaicing

5×5 demosaicing (also called "debayering") carries out color interpolation on regions of 5×5 pixels on the sensor and is therefore more elaborate than the "simple" 2×2 demosaicing used otherwise by the camera.

Color Anti-Aliasing

Color errors, especially on sharp edges and in sections of the image with high spatial frequencies, are a common side effect of demosaicing algorithms. Even colorless structures can suddenly appear to have color. The color anti-aliasing optimization analyzes and corrects the discolorations.

For more information about the PGI image optimizations, see the Better Image Quality with Basler PGI white paper.

Enabling the PGI Feature Set

Automatic

On some camera models, the PGI feature set is enabled automatically whenever the pixel format is set to a non-Bayer color pixel format, i.e., to one of the available RGB, BGR, or YUV pixel formats.

Manual

On some camera models, you must manually enable the PGI feature set. To do so:

  1. Set the pixel format to a non-Bayer color pixel format, i.e., to one of the available RGB, BGR, or YUV pixel formats. On some camera models, the PGI feature set is only available for one of the YUV pixel formats.
  2. Set the DemosaicingMode parameter to BaslerPGI.

Setting the PGI Image Optimizations

Once you have enabled the PGI feature set, you can configure the individual image optimization processes.

Which image optimizations are available and can be configured depends on your camera model.

Configuring Noise Reduction

If this optimization is configurable, you can use the NoiseReductionAbs parameter to specify the desired noise reduction. The higher the parameter value, the more noise reduction is applied.

If this optimization is not configurable, noise reduction is applied automatically.

Noise reduction is best used together with sharpness enhancement. If the parameter value is set too high, fine structure in the image can become indistinct or even disappear.

Configuring Sharpness Enhancement

If this optimization is configurable, you can use the SharpnessEnhancementAbs parameter to specify the desired sharpness enhancement. The higher the parameter value, the more sharpening is applied.

If this optimization is not configurable, sharpness enhancement is applied automatically.

In most cases, best results are obtained at low parameter value settings and when using noise reduction at the same time.

Configuring 5×5 Demosaicing

If available, 5×5 demosaicing is performed automatically whenever the PGI feature set is enabled. You can't configure this optimization.

Configuring Color Anti-Aliasing

If available, color anti-aliasing is performed automatically whenever the PGI feature set is enabled. You can't configure this optimization.

Specifics

Camera Model: acA2500-20gm

Enabling PGI Feature Set: Manual

Available Image Optimizations:
  • Noise Reduction
  • Sharpness Enhancement

Configurable Image Optimizations:
  • Noise Reduction
  • Sharpness Enhancement
// Enable the PGI feature set
camera.Parameters[PLCamera.DemosaicingMode].SetValue(PLCamera.DemosaicingMode.BaslerPGI);
// Configure noise reduction (if available)
camera.Parameters[PLCamera.NoiseReductionAbs].SetValue(0.2);
// Configure sharpness enhancement (if available)
camera.Parameters[PLCamera.SharpnessEnhancementAbs].SetValue(1.0);

Pixel Format

The Pixel Format camera feature allows you to choose the format of the image data transmitted by the camera.

There are different pixel formats depending on the model of your camera and whether it is a color or a mono camera.

Detailed information about pixel formats can be found in the GenICam Pixel Format Naming Convention 2.1.

Prerequisites

The camera must be idle, i.e., not capturing images. Otherwise, the PixelFormat parameter is read-only.

Choosing a Pixel Format

To choose a pixel format, set the PixelFormat parameter to one of the following values:

  • MonoXX (e.g., Mono10, Mono12p) (if available)
  • BayerXXYY (e.g., BayerBG10, BayerGR12) (if available)
  • YCbCr422_8, YUV422Packed, YUV422_YUYV_Packed (if available)
  • RGB8, BGR8 (if available)

Determining the Pixel Format

To determine the pixel format currently used by the camera, read the value of the PixelFormat parameter.

Available Pixel Formats

Mono Formats

If a monochrome camera uses one of the mono pixel formats, it outputs 8, 10, or 12 bits of data per pixel.

If a color camera uses the Mono 8 pixel format, the values for each pixel are first converted to the YUV color model. The Y component of this model represents a brightness value and is equivalent to the value that would be derived from a pixel in a monochrome image. So in essence, when a color camera is set to Mono 8, it outputs an 8-bit monochrome image. This type of output is sometimes referred to as "Y Mono 8".

Bayer Formats

Color cameras are equipped with a Bayer color filter and can output color images based on the Bayer pixel formats given below.

If a color camera uses one of these Bayer pixel formats, it outputs 8, 10, or 12 bits of data per pixel. The pixel data is not processed or interpolated in any way. For each pixel covered with a red filter, you get 8, 10, or 12 bits of red data. For each pixel covered with a green filter, you get 8, 10, or 12 bits of green data. For each pixel covered with a blue filter, you get 8, 10, or 12 bits of blue data. This type of pixel data is sometimes referred to as "raw" output.

YUV Formats

Color cameras can also output color images based on pixel data in YUV (or YCbCr) format.

If a color camera uses this format, each pixel value in the captured image goes through a conversion process as it exits the sensor and passes through the camera. This process yields Y, U, and V color information for each pixel value.

The values for U and V normally range from -128 to +127. Because the camera transfers U values and V values with unsigned integers, 128 is added to each U value and V value before they are transferred from the camera. This way, values from 0 to 255 can be transferred.
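The offset can be sketched as follows (toTransmitted and fromTransmitted are hypothetical helper names):

```cpp
// The camera transmits U and V as unsigned 8-bit values by adding 128;
// the receiver subtracts 128 to recover the signed value.
int toTransmitted(int signedUV)   { return signedUV + 128; }  // -128..127 -> 0..255
int fromTransmitted(int rawByte)  { return rawByte - 128; }   // 0..255 -> -128..127
```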

RGB and BGR Formats

When a color camera uses the RGB 8 or BGR 8 pixel format, the camera outputs 8 bits of red data, 8 bits of green data, and 8 bits of blue data for each pixel in the acquired frame.

The pixel formats differ by the output sequences for the color data (red, green, blue or blue, green, red).

Maximum Pixel Bit Depth

The maximum pixel bit depth is defined by the pixel format with the highest bit depth among the pixel formats available on your camera.

Example: If the available pixel formats for your camera are Mono 8 and Mono 12, the maximum pixel bit depth of the camera is 12 bit.

Unpacked and Packed Pixel Formats

When a camera uses an unpacked pixel format (e.g., Bayer 12), pixel data is always 8-bit aligned. Padding bits (zeros) are inserted as necessary to reach the next 8-bit boundary.

Example (simplified):

Assume that you have chosen a 12-bit unpacked pixel format. The camera outputs 16 bits per pixel: 12 bits of pixel data and 4 padding bits to reach the next 8-bit boundary.

When a camera uses a packed pixel format (e.g., Bayer 12p), pixel data is not aligned. This means that no padding bits are inserted and that one byte can contain data of multiple pixels.

Example (simplified):

Assume that you have chosen a 12-bit packed pixel format. The camera outputs 12 bits per pixel. As a consequence, data for two pixels is always spread over 3 bytes.

The exact data alignment depends on the pixel format. You can find detailed information in the GenICam Pixel Format Naming Convention 2.1.
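As an illustration of packing, the sketch below unpacks two 12-bit pixel values from 3 bytes. It assumes the PFNC Mono12p layout (least significant bits first); other packed formats, e.g., the GigE Vision Mono 12 Packed format, arrange the nibbles differently, so check the convention for your pixel format:

```cpp
#include <cstdint>
#include <utility>

// Unpack two 12-bit pixel values P0 and P1 from 3 bytes, assuming the
// PFNC "Mono12p" layout:
//   b0 = P0[7:0], b1 = P1[3:0] << 4 | P0[11:8], b2 = P1[11:4]
std::pair<uint16_t, uint16_t> unpackMono12p(uint8_t b0, uint8_t b1, uint8_t b2)
{
    uint16_t p0 = b0 | static_cast<uint16_t>(b1 & 0x0F) << 8;
    uint16_t p1 = (b1 >> 4) | static_cast<uint16_t>(b2) << 4;
    return {p0, p1};
}
```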

External Links

  • GenICam Pixel Format Naming Convention 2.1 (European Machine Vision Association)
  • Bayer filter (Wikipedia)

Specifics

Camera Model: acA2500-20gm

Available Pixel Formats:
  • Mono 8
  • Mono 10
  • Mono 10 Packed
// Set the pixel format to Mono 8
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);

Precision Time Protocol

The Precision Time Protocol (PTP) camera feature allows you to synchronize multiple GigE cameras in the same network.

The protocol is defined in the IEEE 1588 standard. Basler cameras support the revised version of the standard (IEEE 1588-2008, also known as PTP Version 2).

The precision of the PTP synchronization depends to a large extent on your network hardware and setup. For maximum precision, choose high-quality network hardware, use PTP-enabled network switches, and add an external PTP clock device with a GPS receiver to your network.

Why Use PTP

The Precision Time Protocol (PTP) feature enables a camera to use the following features:

  • Scheduled Action Commands
  • Synchronous Free Run

How It Works

Through PTP, multiple devices (e.g., cameras) are automatically synchronized with the most accurate clock found in a network, the so-called master clock or best master clock.

The protocol enables systems within a network to do the following:

  • Synchronize with the master clock, i.e., set the local clock as precisely as possible to the time of the master clock.
  • Syntonize with the master clock, i.e., adjust the frequency of the local clock to the frequency of the master clock. You want the duration of a second to be as identical as possible on both devices.

The master clock is determined by several criteria. The most important criterion is the device's Priority 1 setting. The network device with the lowest Priority 1 setting becomes the master clock. On all Basler cameras, the Priority 1 setting is preset to 128 and can't be changed. If your PTP network setup consists only of Basler cameras, the master clock is chosen based on the devices' MAC addresses.

For more information about the master clock criteria, see the IEEE 1588-2008 specification, clause 7.6.2.2.

Timestamp Synchronization

The basic concept of the Precision Time Protocol (IEEE 1588) is based on the exchange of PTP messages. These messages allow the slave clocks to synchronize their timestamp value with the timestamp value of the master clock. When the synchronization has been completed, the GevTimestampValue parameter value on all GigE devices will be as identical as possible. The precision highly depends on your network hardware and setup.

IEEE 1588 defines 80-bit timestamps for storing and transporting time information. Because GigE Vision uses 64-bit timestamps, the PTP timestamps are mapped to the 64-bit timestamps of GigE Vision.

If no device in the network is synchronized to a coordinated world time (e.g., UTC), the network will operate in the arbitrary timescale mode (ARB). In this mode, the epoch is arbitrary, as it is not bound to an absolute time. The timescale is relative and only valid in this network.

  • A PTP-enabled device that operates in a network with no other PTP-enabled devices does not discipline its local clock. The precision of its local clock is identical to a non-PTP-enabled device.
  • If you use a PTP-enabled switch in two-step boundary mode, in rare cases, no camera clock may reach the master state. If this happens, Basler recommends that you set Priority 1 in the switch settings to a value greater than 128. This priority setting ensures that a PTP port is able to be in the slave state and a connected camera is able to become a master.
  • For more information about the Precision Time Protocol, see the Synchronous and in Real Time: Multi-Camera Applications in the GigE Network white paper.

Enabling PTP Clock Synchronization

When powering on the camera, PTP is always disabled. If you want to use PTP, you must enable it.

To enable PTP:

  1. If you want to use an external device as the master clock (e.g., a GPS receiver), configure the external device as the master clock. 

    Basler recommends an announce interval of 2 seconds and a sync interval of 0.5 seconds.

  2. On all cameras that you want to synchronize via PTP, set the GevIEEE1588 parameter to true.
  3. Wait until all PTP network devices are sufficiently synchronized. Depending on your network setup, this may take a few seconds or minutes.

    You can determine when the devices are synchronized by checking the status of the PTP clock synchronization.

Now, you can use the Scheduled Action Commands feature and the Synchronous Free Run feature.

Enabling PTP clock synchronization changes the camera's internal tick frequency from 125 MHz (= 8 ns tick duration) to 1 GHz (= 1 ns tick duration).

The Inter-packet Delay and the Frame Transmission Delay parameter values are adjusted automatically.

Checking the Status of the PTP Clock Synchronization

To check the status of the PTP clock synchronization, you must develop your own check method using the pylon API.

These guidelines may help you in developing a suitable method:

  • The GevIEEE1588DataSetLatch command allows you to take a "snapshot" of the camera's current PTP clock synchronization status. The "snapshot" implementation ensures that all parameter values specified (see below) refer to exactly the same point in time.
  • After executing the GevIEEE1588DataSetLatch command, you can read the following parameter values from the device:
    • GevIEEE1588OffsetFromMaster: A 32-bit number. Indicates the temporal offset between the master clock and the clock of the current IEEE 1588 device in ticks.
    • GevIEEE1588ClockId: A 64-bit number. Indicates the unique ID of the current IEEE 1588 device (the "clock ID").
    • GevIEEE1588ParentClockId: A 64-bit number. Indicates the clock ID of the IEEE 1588 device that currently serves as the master clock (the "parent clock ID").
    • GevIEEE1588StatusLatched: An enumeration. Indicates the state of the current IEEE 1588 device, e.g., whether it is a master or a slave clock. The returned values match the IEEE 1588 PTP port state enumeration (Initializing, Faulty, Disabled, Listening, Pre_Master, Master, Passive, Uncalibrated, and Slave). For more information, refer to the pylon API documentation and the IEEE 1588-2008 specification.
    • GevIEEE1588Status: An enumeration. Serves as an alternative to the GevIEEE1588StatusLatched parameter. Returns the same values, but does not require executing the GevIEEE1588DataSetLatch command beforehand. However, if you read multiple PTP-related values from a device, the GevIEEE1588Status parameter value will not relate to the same point in time as the other values.
  • A typical implementation of a PTP status check involves executing the GevIEEE1588DataSetLatch command and checking the GevIEEE1588OffsetFromMaster parameter value repeatedly on all slave cameras until the highest offset from master is below a certain threshold (defined by you, e.g., 1 millisecond).
  • Due to the fact that the clock is continuously adjusted by a control mechanism, the offset does not decrease monotonically. Basler recommends that you check the maximum GevIEEE1588OffsetFromMaster parameter value within a given time window.
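A sketch of such a check (all names are hypothetical; latchOffset stands in for executing the GevIEEE1588DataSetLatch command and reading the GevIEEE1588OffsetFromMaster parameter on one camera):

```cpp
#include <cstdint>
#include <functional>

// Consider the clock synchronized once the maximum absolute offset from
// the master, observed over a window of 'samples' readings, is below
// 'thresholdTicks'. Checking the maximum (not the latest value) accounts
// for the offset not decreasing monotonically.
bool offsetsConverged(const std::function<int64_t()>& latchOffset,
                      int samples, int64_t thresholdTicks)
{
    int64_t maxOffset = 0;
    for (int i = 0; i < samples; ++i)
    {
        int64_t offset = latchOffset();
        if (offset < 0)
            offset = -offset;
        if (offset > maxOffset)
            maxOffset = offset;
    }
    return maxOffset < thresholdTicks;
}
```

In a real application, you would repeat this check (with a delay between readings) on every slave camera until it returns true.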

External Links

  • Precision Time Protocol (Wikipedia)
  • White Paper: Synchronous and in Real Time: Multi-Camera Applications in the GigE Network (Basler)
  • IEEE 1588-2008 Specification (IEEE Standards Association)
// Enable PTP on the current device
camera.Parameters[PLCamera.GevIEEE1588].SetValue(true);
// To check the status of the PTP clock synchronization,
// implement your own check method here.
// For guidelines, see section "Checking the Status of
// the PTP Clock Synchronization" in this topic.

Remove Parameter Limits

The Remove Parameter Limits camera feature allows you to remove the factory limits of certain camera features.

When the factory limits are removed, extended parameter value ranges are available.

How It Works

Normally, a parameter's allowed value range is limited. These factory limits are designed to ensure optimum camera performance and, in particular, good image quality. For certain use cases, however, you may want to specify parameter values outside of the factory limits. This is where the ability to remove parameter limits comes in useful.

Which parameter limits can be removed depends on your camera model.

Removing a Parameter Limit

To remove a parameter limit:

  1. Set the RemoveParameterLimitSelector parameter to the parameter whose limits you want to remove.
  2. Set the RemoveParameterLimit parameter to true.

Specifics

Camera Model Removable Parameter Limits
acA2500-20gm
  • Gain
  • ExposureTime
  • AutoTargetValue
// Select the Gain parameter
camera.Parameters[PLCamera.RemoveParameterLimitSelector].SetValue(PLCamera.RemoveParameterLimitSelector.Gain);
// Remove the limits of the selected parameter
camera.Parameters[PLCamera.RemoveParameterLimit].SetValue(true);

Resulting Frame Rate

The Resulting Frame Rate camera feature allows you to determine the maximum frame rate with the current camera settings.

This is useful, for example, if you want to know how long you have to wait between triggers.

The frame rate is expressed in frames per second (fps).

Why Check the Resulting Frame Rate

Optimizing the Frame Rate

When the camera is configured for free run image acquisition and continuous acquisition, knowing the resulting frame rate is useful if you want to optimize the frame rate for your imaging application. You can adjust the camera settings limiting the frame rate until the resulting frame rate reaches the desired value.

For example, if your imaging application requires 30 fps and the current resulting frame rate is 25 fps, you can reduce the Image ROI height until the resulting frame rate reaches 30 fps.

Optimizing Triggered Image Acquisition

When the camera is configured for triggered image acquisition, knowing the resulting frame rate is useful if you want to trigger the camera as often as possible without overtriggering. You can calculate how long you must wait after each trigger signal by taking the reciprocal of the resulting frame rate: 1 / Resulting Frame Rate.

Example: If the resulting frame rate is 12.5, you must wait for a minimum of 1/12.5 = 0.08 seconds after each trigger signal. Otherwise, the camera ignores the trigger signal and generates a Frame Start Overtrigger event.
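The reciprocal calculation from the example above can be written as a one-line helper (a sketch; the function name is illustrative):

```cpp
// Derive the minimum wait time between trigger signals from the
// resulting frame rate: 1 / Resulting Frame Rate.
double minTriggerIntervalSeconds(double resultingFrameRateFps)
{
    return 1.0 / resultingFrameRateFps;
}
// minTriggerIntervalSeconds(12.5) yields 0.08 seconds
```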

Checking the Resulting Frame Rate

To check the resulting frame rate, i.e., the maximum frame rate with the current camera settings, read the value of the ResultingFrameRateAbs parameter. The value is expressed in frames per second (fps).

The parameter value takes all factors limiting the frame rate into account.

  • Checking the resulting frame rate works when the camera is idle as well as when the camera is acquiring images.
  • If you want to check the resulting frame rate of Basler cameras that aren't connected to your computer at the moment, use the online Basler Frame Rate Calculator.

Factors Limiting the Frame Rate

Several factors may limit the frame rate on any Basler camera:

  • Acquisition Frame Rate: If the Acquisition Frame Rate feature is enabled, the camera's frame rate is limited by the acquisition frame rate. For example, if you enable the Acquisition Frame Rate feature and set an acquisition frame rate of 10 fps, the camera's frame rate will never be higher than 10 fps.
  • Bandwidth Assigned: Bandwidth limitations can reduce the frame rate, especially when more than one camera is connected to a computer. The extent of the reduction depends on the camera model, the number of cameras connected to the computer, and the computer’s hardware.
  • DeviceLinkThroughputLimitMode: If this parameter is available and set to On, it limits the maximum available bandwidth for data transmission to the DeviceLinkThroughputLimit parameter value. This also limits the frame rate.
  • Exposure Time: If you use very long exposure times, you can acquire fewer frames per second.
  • Field Output Mode: If available, using the Field 0 Output Mode or Field 1 Output Mode increases the camera’s frame rate.
  • Image ROI: If you use a large Image ROI, you can acquire fewer frames per second. On most cameras, decreasing the Image ROI height increases the frame rate. On some cameras, decreasing the Image ROI width also increases the frame rate.
  • Parameter Limits: On some camera models, you can use the Remove Parameter Limits feature to remove the frame rate limitation.
  • Sensor Readout Mode: On some cameras, you can enable a fast sensor readout mode that allows you to increase the frame rate. This can, however, have adverse effects on image quality.
  • Shutter Mode: If the Global Reset Release shutter mode is selected, overlapped image acquisition is not possible. This decreases the camera’s maximum frame rate.
  • Stacked Zone Imaging: If available, using the Stacked Zone Imaging feature increases the camera’s frame rate.
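As a simplified mental model, the resulting frame rate is bounded by the smallest of the individual limits listed above. This is only an approximation for reasoning about the factors; the camera computes the exact value in the ResultingFrameRateAbs parameter:

```cpp
#include <algorithm>
#include <vector>

// Simplified model: the resulting frame rate is the minimum of the
// individual limits (acquisition frame rate, bandwidth, exposure time,
// sensor readout, ...), each expressed in fps.
double resultingFrameRateModel(const std::vector<double>& limitsFps)
{
    return *std::min_element(limitsFps.begin(), limitsFps.end());
}
```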

External Links

  • Basler Frame Rate Calculator (baslerweb.com)
// Get the resulting frame rate
double d = camera.Parameters[PLCamera.ResultingFrameRateAbs].GetValue();

Reverse X and Reverse Y

The Reverse X and Reverse Y camera features allow you to mirror acquired images horizontally, vertically, or both.

Reverse X is available on all camera models. Reverse Y is available on selected camera models.

Enabling Reverse X

To enable Reverse X, set the ReverseX parameter to true.

The camera mirrors the image horizontally:

Enabling Reverse Y

On some camera models, the Reverse Y feature is also available.

To enable Reverse Y, set the ReverseY parameter to true.

The camera mirrors the image vertically:

Using Image ROIs or Auto Function ROIs with Reverse X or Reverse Y

If you have specified an Image ROI or Auto Function ROI while using Reverse X or Reverse Y, you have to bear in mind that the position of the ROI relative to the sensor remains the same.

As a consequence, the camera captures different portions of the image depending on whether the Reverse X or the Reverse Y feature are enabled:

Effective Bayer Filter Alignments (Color Cameras Only)

Depending on your camera model, the Bayer filter alignment changes when Reverse X, Reverse Y, or both are used.

For example, if you use a camera with a physical Bayer BG filter alignment and enable Reverse X, the actual Bayer filter alignment will be Bayer GB. The PixelFormat parameter value changes accordingly.

Specifics

Camera Model Reverse X Available Reverse Y Available Changes in Bayer Filter Alignment
acA2500-20gm Yes Yes N/A (mono camera)
// Enable Reverse X
camera.Parameters[PLCamera.ReverseX].SetValue(true);
// Enable Reverse Y, if available
camera.Parameters[PLCamera.ReverseY].SetValue(true);

Scheduled Action Commands

The Scheduled Action Commands camera feature allows you to send action commands that are executed in multiple cameras at exactly the same time.

If exact timing is not a critical factor in your application, you can use the Action Commands feature instead.

How It Works

The basic parameters of the Scheduled Action Command feature are the same as for the Action Commands feature:

  • Action device key
  • Action group key
  • Action group mask
  • Broadcast address

In addition to these parameters, the Scheduled Action Command feature uses the following parameter:

Action Time

A 64-bit GigE Vision timestamp used to define when the action is to be executed.

The action is executed as soon as the internal timestamp value of a camera reaches the specified value.

With the Precision Time Protocol enabled, the timestamp value is synchronized across all cameras in the network. As a result, the action will be executed on all cameras in the network at exactly the same time.

The value must be entered in ticks. On Basler cameras with the Precision Time Protocol feature enabled, one tick equals one nanosecond.

Example: Assume you issue a scheduled action command with the action time set to 100 000 000 000. The action will be executed as soon as the timestamp value of all cameras in the specified network segment reaches 100 000 000 000.

If 0 (zero) is entered or if the action time is set to a time in the past, the action command will be executed immediately, equivalent to a standard action command.
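The execution rule described above can be sketched as a small decision function (a sketch with illustrative names, not part of the pylon API):

```cpp
#include <cstdint>

// An action time of 0, or a time at or before the camera's current
// timestamp, causes immediate execution (like a standard action
// command); otherwise the command waits until the camera's internal
// timestamp reaches the action time.
bool executesImmediately(int64_t actionTimeTicks, int64_t currentTimestampTicks)
{
    return actionTimeTicks == 0 || actionTimeTicks <= currentTimestampTicks;
}
```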

Using Scheduled Action Commands

Configuring the Cameras

Follow the procedure outlined in the Action Commands topic.

Issuing a Scheduled Action Command

General Use

To issue a scheduled action command:

  1. Make sure that all cameras in your network are synchronized via the Precision Time Protocol feature.
  2. Call the IssueScheduledActionCommand method in your application.

    The parameters are similar to the IssueActionCommand method. The only difference is the additional Action Time parameter.

    Example:

Issuing a Scheduled Action Command to Be Executed after a Certain Delay

To issue a scheduled action command that is executed after a certain delay:

  • The following steps must be performed using the pylon API.
  • Because there is an unspecified delay between transmission and execution of the pylon API commands, the desired delay can't be achieved exactly.
  1. Make sure that all cameras in your network are synchronized via the Precision Time Protocol feature.
  2. Execute the GevTimestampControlLatch command on one of your cameras. If one of your cameras serves as the PTP master clock, use this camera.

    A "snapshot" of the camera’s current timestamp value is taken.

  3. Get the value of the GevTimestampValue parameter on the same camera.

    The value is given in ticks. On Basler cameras with the Precision Time Protocol feature enabled, one tick equals one nanosecond.

  4. Call the IssueScheduledActionCommand method with the action time set to the value determined in step 3 plus the desired delay in ticks (= nanoseconds).

    For example, if you want the command executed after roughly 30 seconds, set the action time to GevTimestampValue + 30 000 000 000.

All cameras in the network segment will execute the command simultaneously after the given delay.

Issuing a Scheduled Action Command to Be Executed at a Precise Point in Time

To issue a scheduled action command that is executed at a precise point in time:

  1. Make sure that all cameras in your network are synchronized via the Precision Time Protocol feature to a time standard, e.g., Coordinated Universal Time (UTC).

    This can be achieved, e.g., by integrating an IEEE 1588-enabled UTC clock device in your network.

  2. Call the IssueScheduledActionCommand method with the action time set to a coordinated time value.

    For example, if your cameras are synchronized to UTC, you can set the action time to 1 765 537 200 000 000 000 to execute the action command exactly on Fri Dec 12 2025 11:00:00 UTC.
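The action time in the example above can be derived from a UTC date with a standard days-from-civil calculation, assuming the cameras' clocks use the Unix epoch as time zero and one tick per nanosecond (`utcToTicks` is an illustrative helper, not a pylon function):

```cpp
#include <cstdint>

// Convert a UTC date/time to a GigE Vision action time in ticks
// (1 tick = 1 ns on PTP-enabled Basler cameras), assuming the camera
// timestamps count nanoseconds since the Unix epoch (1970-01-01 UTC).
int64_t utcToTicks(int year, int month, int day, int hour, int minute, int second)
{
    // Days since 1970-01-01 (civil calendar, Gregorian)
    year -= month <= 2;
    const int era = (year >= 0 ? year : year - 399) / 400;
    const unsigned yoe = static_cast<unsigned>(year - era * 400);              // [0, 399]
    const unsigned doy = (153u * (month + (month > 2 ? -3 : 9)) + 2) / 5 + day - 1;
    const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
    const int64_t days = static_cast<int64_t>(era) * 146097
                       + static_cast<int64_t>(doe) - 719468;
    const int64_t seconds = days * 86400 + hour * 3600 + minute * 60 + second;
    return seconds * 1000000000LL;  // ticks (nanoseconds)
}
// utcToTicks(2025, 12, 12, 11, 0, 0) yields 1 765 537 200 000 000 000,
// matching the example above.
```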

// Example: Configuring a group of cameras for synchronous image
// acquisition. It is assumed that the "cameras" object is an 
// instance of CBaslerGigEInstantCameraArray.
//--- Start of camera setup ---
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
    // Open the camera connection
    cameras[i].Open();
    // Configure the trigger selector
    cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
    // Select the mode for the selected trigger
    cameras[i].TriggerMode.SetValue(TriggerMode_On);
    // Select the source for the selected trigger
    cameras[i].TriggerSource.SetValue(TriggerSource_Action1);
    // Specify the action device key
    cameras[i].ActionDeviceKey.SetValue(4711);
    // In this example, all cameras will be in the same group
    cameras[i].ActionGroupKey.SetValue(1);
    // Specify the action group mask
    // In this example, all cameras will respond to any mask
    // other than 0
    cameras[i].ActionGroupMask.SetValue(0xffffffff);
}
//--- End of camera setup ---
// Get the current timestamp of the first camera
// NOTE: All cameras must be synchronized via Precision Time Protocol
cameras[0].GevTimestampControlLatch.Execute();
int64_t currentTimestamp = cameras[0].GevTimestampValue.GetValue();
// Specify that the command will be executed roughly 30 seconds 
// (30 000 000 000 ticks) after the current timestamp.
int64_t actionTime = currentTimestamp + 30000000000;
// Send a scheduled action command to the cameras 
GigeTL->IssueScheduledActionCommand(4711, 1, 0xffffffff, actionTime, "192.168.1.255");

Sensor Readout Mode

The Sensor Readout Mode camera feature allows you to choose between sensor readout modes that provide different sensor readout times.

Decreasing the sensor readout time can increase the camera's frame rate.

To configure the sensor readout mode, set the SensorReadoutMode parameter to one of the following values:

  • Normal: The readout time for each row of pixels remains unchanged.
  • Fast: The readout time for each row of pixels is reduced, compared to normal readout. Accordingly, the overall sensor readout time is reduced and the camera can operate at higher frame rates. This can, however, result in reduced image quality.
// Set the sensor readout mode to Fast
camera.Parameters[PLCamera.SensorReadoutMode].SetValue(PLCamera.SensorReadoutMode.Fast);
// Get the current sensor readout mode
string e = camera.Parameters[PLCamera.SensorReadoutMode].GetValue();

Sensor Readout Time

The Sensor Readout Time camera feature allows you to determine the amount of time it takes to read out the data of an image from the sensor.

This feature only provides a very rough estimate of the sensor readout time. If you want to optimize the camera for triggered image acquisition or for overlapping image acquisition, use the Resulting Frame Rate feature instead.

Why Determine the Sensor Readout Time

Each image acquisition process includes two parts:

  1. Exposure of the pixels of the imaging sensor, i.e., the exposure time.
  2. Readout of the pixel values from the sensor, i.e., the sensor readout time.

The Sensor Readout Time feature is useful if you want to estimate which part of the image acquisition process is limiting the camera's frame rate.

To do so, compare the exposure time with the sensor readout time:

  • If the sensor readout time is considerably longer than the exposure time, consider adjusting camera features that affect the sensor readout time, e.g., Binning, Decimation, or Image ROI.
  • If the exposure time is considerably longer than the sensor readout time, consider reducing the exposure time.
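The comparison above can be sketched as a small helper that reports which part of the acquisition process is likely limiting the frame rate (an illustrative function, not a camera parameter):

```cpp
#include <string>

// Compare the exposure time with the (rough) sensor readout time, both
// in microseconds, and report the likely limiting factor.
std::string limitingFactor(double exposureTimeUs, double readoutTimeUs)
{
    if (readoutTimeUs > exposureTimeUs)
        return "readout";   // consider Binning, Decimation, or a smaller Image ROI
    if (exposureTimeUs > readoutTimeUs)
        return "exposure";  // consider reducing the exposure time
    return "balanced";
}
```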

Determining the Sensor Readout Time

To determine the sensor readout time under the current settings, read the value of the ReadoutTimeAbs parameter. The sensor readout time is measured in microseconds.

The result is only an approximate value and depends on various camera settings and features, e.g., Binning, Decimation, or Image ROI.

// Determine the sensor readout time under the current settings
double d = camera.Parameters[PLCamera.ReadoutTimeAbs].GetValue();

Sequencer (GigE Cameras)

The Sequencer (GigE Cameras) camera feature allows you to define up to 64 sets of parameter settings, called sequence sets, and apply them to a sequence of image acquisitions.

As the camera acquires images, it applies one sequence set after the other. This enables you to quickly change camera parameters without compromising the maximum frame rate.

For example, you can use the Sequencer feature to quickly change between preconfigured Image ROIs or exposure times.

The Sequencer feature for USB 3.0 cameras is described in a separate topic.

Prerequisites

All auto functions (e.g., Gain Auto, Exposure Auto) must be set to Off.

Enabling or Disabling the Sequencer

When enabled, the sequencer controls image acquisitions. It can't be configured in this state.

When disabled, the sequencer can be configured but is not controlling image acquisitions.

To enable the sequencer, set the SequenceEnable parameter to true.

How to disable the sequencer depends on your camera model:

  • If the SequenceConfigurationMode parameter is available:
    1. Set the SequenceEnable parameter to false.
    2. Set the SequenceConfigurationMode parameter to On.
  • If the SequenceConfigurationMode parameter is not available:
    1. Set the SequenceEnable parameter to false.

What's in a Sequence Set?

Configuring Sequence Sets

  • To configure sequence sets, the sequencer must be disabled.
  • All changes made to sequence sets are lost when the camera is disconnected from power. Also, sequencer settings can't be saved in a user set. In order to preserve your settings, Basler recommends that you write suitable program code using the pylon API to re-populate the sequence sets every time the camera is powered on.

Before you can use the Sequencer feature, you must populate the sequence sets with your desired settings. Each sequence set has a unique sequence set index number, ranging from 0 to 63.

To populate the sequence sets:

  1. Set the SequenceSetTotalNumber parameter to the total number of sequence sets you want to use.
  2. Configure the sequence set parameters that you want to store in sequence set 0.
  3. Save sequence set 0.
  4. Repeat steps 2 and 3 for all sequence sets you want to use.

    Make sure to always use a continuous series of index numbers starting with index number 0, e.g., use sequence sets 0, 1, 2, and 3.

Example: Assume you need two sequence sets and want to populate them with different Image ROI settings. To do so:

  1. Set the SequenceSetTotalNumber parameter to 2.
  2. Create the first Image ROI by adjusting the Width, Height, OffsetX, and OffsetY parameter values.
  3. Save sequence set 0.
  4. Create the second Image ROI by choosing different values for the Width, Height, OffsetX, and OffsetY parameters.
  5. Save sequence set 1.

You can now configure the sequencer to quickly change between the two Image ROIs.

Saving a Sequence Set

To save a sequence set:

  1. Set the SequenceSetIndex parameter to the desired sequence set.
  2. Execute the SequenceSetStore command.

The values of all sequence set parameters are stored in the selected sequence set.

Loading a Sequence Set

Sequence sets are loaded automatically during sequencer operation. However, loading a sequence set manually can be useful for testing purposes or when configuring the sequencer.

To manually load a sequence set:

  1. Set the SequenceSetIndex parameter to the desired sequence set.
  2. Execute the SequenceSetLoad command.

The values of all sequence set parameters are overwritten and replaced by the values stored in the selected sequence set.

Configuring the Sequencer

After you have configured the sequence sets, you must configure the sequencer.

  • To configure the sequencer, the sequencer must be disabled.
  • All changes made to the sequencer configuration are lost when the camera is disconnected from power. Also, sequencer settings can't be saved in a user set. Basler recommends that you write suitable program code using the pylon API to reconfigure the camera every time it is powered on.
  • You can use the SequenceSetIndex chunk to keep track of the sequence sets used. When enabled, each image contains chunk data including the index number of the sequence set used for image acquisition.

The sequencer can be operated in three modes, called "advance modes":

  • Auto sequence advance mode
  • Controlled sequence advance mode
  • Free selection advance mode

In all modes, sequence sets always advance in ascending order, starting from sequence set index number 0.

Auto Sequence Advance Mode

This mode is useful if you want to configure a fixed sequence which is repeated continuously.

You can enable this mode by setting the SequenceAdvanceMode parameter to Auto.

In this mode, the advance from one sequence set to the next occurs automatically as Frame Start trigger signals are received.

The SequenceSetTotalNumber parameter specifies the total number of sequence sets to be used. After the sequence set with the highest index number has been used, the cycle starts again at 0.

Example: Assume you want to configure a sequence cycle in which sequence sets 0 to 4 are each used once per cycle (0 - 1 - 2 - 3 - 4 - 0 - ...).

To configure the above sequence cycle:

  1. Set the SequenceAdvanceMode parameter to Auto.
  2. Set the SequenceSetTotalNumber parameter to 5.

Using Sequence Sets Multiple Times

Optionally, each sequence set can be used several times in a row.

To specify how many times you want to use each sequence set:

  1. Load the desired sequence set.
  2. Configure the SequenceSetExecutions parameter for this sequence set. 

    By default, the parameter is set to 1 for all sets which means that each sequence set is used once per cycle.

  3. Save the sequence set.

Example: Assume you want to configure a sequence cycle with six sequence sets in which sequence set 1 is used three times in a row and sequence set 5 twice in a row (0 - 1 - 1 - 1 - 2 - 3 - 4 - 5 - 5 - 0 - ...).

To configure the above sequence cycle:

  1. Set the SequenceAdvanceMode parameter to Auto.
  2. Set the SequenceSetTotalNumber parameter to 6.
  3. Configure the SequenceSetExecutions parameter for each sequence set:
    • Sequence sets 0, 2, 3, and 4 are to be used only once per cycle. Therefore, you can skip these sets and leave the SequenceSetExecutions parameter at the default value of 1.
    • Sequence set 1 is to be used three times in a row. Load sequence set 1, set the SequenceSetExecutions parameter to 3, and save sequence set 1.
    • Sequence set 5 is to be used two times in a row. Load sequence set 5, set the SequenceSetExecutions parameter to 2, and save sequence set 5.
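The effect of the SequenceSetExecutions values in the example above can be sketched by expanding them into the order in which the sets are applied during one cycle (`expandCycle` is an illustrative helper):

```cpp
#include <vector>

// Expand the per-set SequenceSetExecutions values into the order in
// which the sequence sets are applied during one cycle of auto
// sequence advance mode.
std::vector<int> expandCycle(const std::vector<int>& executionsPerSet)
{
    std::vector<int> cycle;
    for (int setIndex = 0; setIndex < static_cast<int>(executionsPerSet.size()); ++setIndex)
        for (int n = 0; n < executionsPerSet[setIndex]; ++n)
            cycle.push_back(setIndex);
    return cycle;
}
// expandCycle({1, 3, 1, 1, 1, 2}) yields 0, 1, 1, 1, 2, 3, 4, 5, 5
```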

Controlled Sequence Advance Mode

This mode is useful if you want to configure a dynamic sequence which can be controlled via line 1 or software commands.

  • For real-time applications, Basler strongly recommends that you do not control the sequencer via software commands. The delay between sending a software command and it becoming effective depends on the specific installation and the current load on the network. Therefore, it's impossible to predict how many image acquisitions may occur between sending the software command and it becoming effective.
  • When you're controlling the sequencer using line 1, you have to bear in mind that it takes one microsecond between setting the status of the line and the rise of the Frame Start trigger signal. You also have to maintain the status of the line for at least one microsecond after the Frame Start trigger signal has risen. Monitor the Frame Trigger Wait signal to optimize the timing.

You can enable this mode by setting the SequenceAdvanceMode parameter to Controlled.

As in the other modes, the advance always proceeds in ascending order, starting from sequence set index number 0.

You can, however, control the following:

  • Sequence set advance: When do you want the sequencer to advance to the next sequence set?
  • Sequence set restart: When do you want the sequence cycle to start again from sequence set 0?

The SequenceSetTotalNumber parameter specifies the total number of sequence sets you want to use. After the sequence set with the highest index number has been used, the cycle starts again at 0.

Configuring Sequence Set Advance

To configure sequence set advance:

  1. Set the SequenceControlSelector parameter to Advance.
  2. Set the SequenceControlSource parameter to one of the following options:
    • Line1: Sequence set advance will be controlled via line 1. If line 1 is low (0) while a Frame Start trigger signal is received, the sequencer does not advance and the current sequence set is used again for image acquisition. If line 1 is high (1) while a Frame Start trigger signal is received, the sequencer advances and the next sequence set in the cycle is used for image acquisition.
    • Disabled: Sequence set advance will be controlled using the SequenceAsyncAdvance software command. When this command is received, the sequencer advances without acquiring an image. When the next Frame Start trigger signal is received, the sequence set indicated by the SequenceCurrentSet parameter value is used for image acquisition.
    • AlwaysActive: The sequencer behaves as if Line1 was selected and line 1 was always high (1). As a result, the sequencer advances every time a Frame Start trigger signal is received. This way of operating the sequencer is similar to operating it in auto sequence advance mode when each sequence set is used only once per cycle. The only difference is that sequence set 1 is used as the first sequence set instead of sequence set 0.

Configuring Sequence Set Restart

To configure sequence set restart:

  1. Set the SequenceControlSelector parameter to Restart.
  2. Set the SequenceControlSource parameter to one of the following options:
    • Line1: Sequence set restart will be controlled via line 1. If line 1 is low (0) while a Frame Start trigger signal is received, the next sequence set is used. If line 1 is high (1) while a Frame Start trigger signal is received, the sequence cycle is restarted and sequence set 0 is used.
    • Disabled: Sequence set restart will be controlled using the SequenceAsyncRestart software command. When this command is received, the sequence cycle is restarted without acquiring an image. When the next Frame Start trigger signal is received, sequence set 0 is used.

Free Selection Advance Mode

This mode is useful if you want to quickly change between freely selectable sequence sets without having to observe any particular order. You use the input lines of your camera to determine the sequence.

Bear in mind that it takes one microsecond between setting the status of the line and the rise of the Frame Start trigger signal. You also have to maintain the status of the line for at least one microsecond after the Frame Start trigger signal has risen. Monitor the Frame Trigger Wait signal to optimize the timing.

How to configure free selection advance mode depends on how many input lines are available on your camera:

Cameras with One Input Line

Sequence sets are chosen according to the status of input line 1:

  • If line 1 is low (0) while a Frame Start trigger signal is received, sequence set 0 is used for image acquisition.
  • If line 1 is high (1) while a Frame Start trigger signal is received, sequence set 1 is used for image acquisition.

Only sequence sets 0 and 1 are available.

To enable free selection advance mode:

  1. Set the SequenceAdvanceMode parameter to FreeSelection.
  2. Set the SequenceSetTotalNumber parameter to 2.

The SequenceAddressBitSelector and SequenceAddressBitSource parameters also control the operation of the free selection advance mode. However, these parameters are preset and can’t be changed.

Cameras with Two Input Lines

Sequence sets are chosen according to the status of line 1 (opto-coupled input line) and line 3 (GPIO line, must be configured as input), resulting in four possible combinations. This allows you to choose between four sequence sets. Consequently, only sequence sets 0, 1, 2, and 3 are available.

In order to configure the free selection advance mode, you must assign a "sequence set address bit" to each line. The combinations of these address bits determine the sequence set index number. The following table shows the possible combinations and their respective outcomes.

Address Bit 1 Address Bit 0 Sequence Set That Will Be Selected
0 0 Sequence set 0
0 1 Sequence set 1
1 0 Sequence set 2
1 1 Sequence set 3

For example, you can assign line 1 to bit 1 and line 3 to bit 0. This results in the following sample configuration:

  • If line 1 is low (0) and line 3 is low (0) while a Frame Start trigger signal is received, sequence set 0 is used for image acquisition.
  • If line 1 is low (0) and line 3 is high (1) while a Frame Start trigger signal is received, sequence set 1 is used for image acquisition.
  • If line 1 is high (1) and line 3 is low (0) while a Frame Start trigger signal is received, sequence set 2 is used for image acquisition.
  • If line 1 is high (1) and line 3 is high (1) while a Frame Start trigger signal is received, sequence set 3 is used for image acquisition.

To configure the bits and enable free selection advance mode:

  1. Set the SequenceAdvanceMode parameter to FreeSelection.
  2. Set the SequenceSetTotalNumber parameter to 4.
  3. Set the SequenceAddressBitSelector parameter to Bit0.
  4. Set the SequenceAddressBitSource parameter to the line that you want to assign to bit 0, e.g., Line3.
  5. Set the SequenceAddressBitSelector parameter to Bit1.
  6. Set the SequenceAddressBitSource parameter to the line that you want to assign to bit 1, e.g., Line1.

You can also use only one input line in free selection advance mode. To do so, set the SequenceSetTotalNumber parameter to 2. Now, only bit 0 is used to choose a sequence set. The free selection advance mode will behave as described under "Cameras with One Input Line".
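The address-bit logic in the table above can be sketched as a small function (an illustrative helper; in the sample configuration, bit 1 is driven by line 1 and bit 0 by line 3):

```cpp
// The states of the two input lines form a 2-bit sequence set index:
// index = (address bit 1 << 1) | address bit 0.
int selectedSequenceSet(bool bit1High, bool bit0High)
{
    return (bit1High ? 2 : 0) + (bit0High ? 1 : 0);
}
```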

Timing Diagrams

Specifics

Camera Model SequenceConfigurationMode Parameter Available
acA2500-20gm Yes

/*Configuring sequence sets*/

camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Configure the parameters that you want to store in the first sequence set
camera.Parameters[PLCamera.Width].SetValue(500);
camera.Parameters[PLCamera.Height].SetValue(300);
// Select sequence set 0 and save the parameter values
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(0);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Configure the parameters that you want to store in the second sequence set
camera.Parameters[PLCamera.Width].SetValue(800);
camera.Parameters[PLCamera.Height].SetValue(600);
// Select sequence set 1 and save the parameter values
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(1);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
/*Configuring the sequencer for auto sequence advance mode
Assuming you want to configure the following sequence cycle:
0 - 0 - 1 - 1 - 1 (- 0 - 0 - ...)*/

camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.Auto);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Load sequence set 0 and specify that this set is to be used
// 2 times in a row
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(0);
camera.Parameters[PLCamera.SequenceSetLoad].Execute();
camera.Parameters[PLCamera.SequenceSetExecutions].SetValue(2);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Load sequence set 1 and specify that this set is to be used
// 3 times in a row
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(1);
camera.Parameters[PLCamera.SequenceSetLoad].Execute();
camera.Parameters[PLCamera.SequenceSetExecutions].SetValue(3);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
/*Configuring the sequencer for controlled sequence advance mode*/

camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.Controlled);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Specify that sequence set advance is controlled via line 1
camera.Parameters[PLCamera.SequenceControlSelector].SetValue(PLCamera.SequenceControlSelector.Advance);
camera.Parameters[PLCamera.SequenceControlSource].SetValue(PLCamera.SequenceControlSource.Line1);
// Specify that sequence set restart is controlled
// via software command
camera.Parameters[PLCamera.SequenceControlSelector].SetValue(PLCamera.SequenceControlSelector.Restart);
camera.Parameters[PLCamera.SequenceControlSource].SetValue(PLCamera.SequenceControlSource.Disabled);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
// Restart the sequencer via software command (for testing purposes)
camera.Parameters[PLCamera.SequenceAsyncRestart].Execute();
/*Configuring the sequencer for free selection advance mode
on cameras with ONE input line*/

camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.FreeSelection);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
/*Configuring the sequencer for free selection advance mode
on cameras with TWO input lines (1x opto-coupled, 1x GPIO set for input)*/

camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.FreeSelection);
// Set the total number of sequence sets to 4
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(4);
// Assign sequence address bit 0 to line 3
camera.Parameters[PLCamera.SequenceAddressBitSelector].SetValue(PLCamera.SequenceAddressBitSelector.Bit0);
camera.Parameters[PLCamera.SequenceAddressBitSource].SetValue(PLCamera.SequenceAddressBitSource.Line3);
// Assign sequence address bit 1 to line 1
camera.Parameters[PLCamera.SequenceAddressBitSelector].SetValue(PLCamera.SequenceAddressBitSelector.Bit1);
camera.Parameters[PLCamera.SequenceAddressBitSource].SetValue(PLCamera.SequenceAddressBitSource.Line1);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);

Shutter Mode

The Shutter Mode camera feature allows you to determine or configure the operating mode of the camera's electronic shutter.

The shutter mode refers to the way in which image data is captured and processed. Which shutter modes are available depends on the design of the imaging sensor.

Determining the Shutter Mode

To determine the current shutter mode, get the value of the ShutterMode parameter. The parameter can take the following values:

  • Global: The camera is operating in Global shutter mode.
  • Rolling: The camera is operating in Rolling shutter mode.
  • GlobalResetRelease: The camera is operating in Global Reset Release shutter mode.

Configuring the Shutter Mode

If multiple shutter modes are available on your camera model, you can choose the desired shutter mode.

To do so, set the ShutterMode parameter to one of the following values:

  • Global: Enables the Global shutter mode.
  • Rolling: Enables the Rolling shutter mode.
  • GlobalResetRelease: Enables the Global Reset Release shutter mode.

Advantages and Disadvantages

Shutter Mode | Advantage | Disadvantage
Global shutter mode | Well suited for capturing fast-moving objects | Higher ambient noise
Rolling shutter mode | Lower ambient noise | Image distortion can occur if very fast-moving objects are captured
Global Reset Release shutter mode | Lower ambient noise; well suited for capturing fast-moving objects | Flash lighting must be used

Available Shutter Modes

Depending on your camera model, the following shutter modes are available:

Global Shutter Mode

During every image acquisition in Global shutter mode, all of the sensor's pixels start exposing at the same time and also stop exposing at the same time. Immediately after the end of exposure, pixel data readout begins and proceeds row by row until all pixel data has been read. This is particularly useful if you want to capture fast-moving objects or if the camera is moving rapidly while capturing images.

Cameras that operate in the Global shutter mode can provide an Exposure Active output signal. The signal goes high when exposure begins and goes low when exposure ends.

The sensor readout time is the sum of all row readout times. Therefore, the sensor readout time is influenced by the Image ROI height. You can determine the readout time by checking the value of the camera’s ReadoutTimeAbs parameter.

On some camera models, the Sensor Readout Mode feature is available. This feature allows you to reduce the sensor readout time.

Rolling Shutter Mode

In Rolling shutter mode, the camera exposes the pixel rows one after the other, with a temporal offset (tRow) from one row to the next. With this method, the ambient noise is typically significantly lower than with the global shutter method.

When frame start is triggered, the camera resets the first row and begins exposing it. For most cameras, this row is the first row of the Image ROI. For some cameras, the first row exposed is always the first row of the sensor, regardless of the Image ROI settings.

A short time later (= 1 x tRow), the camera resets the second row and begins exposing that row. After another short time (= 1 x tRow), the camera resets the third row and begins exposing that row.

This continues until a last row of pixels is reached. For most cameras, this row is the last row of the Image ROI. For some cameras, the last row exposed is always the last row of the sensor, regardless of the Image ROI settings.

The length of tRow varies by camera model.

The pixel values for each row are read out at the end of the exposure time of each row. The exposure time is the same for all rows. Because the readout time for each row is also tRow, the temporal shift for the end of readout is identical to the temporal shift for the start of exposure.

The sensor readout time is the sum of all row readout times: tRow x Image ROI height.

Therefore, the sensor readout time also depends on the Image ROI height. To determine the readout time, check the value of the camera’s ReadoutTimeAbs parameter.

Other Factors Influencing the Frame Period

Besides the exposure time and the sensor readout time, there are other factors influencing the frame period, e.g., the time needed to prepare the sensor for the next acquisition.

These other factors vary by camera model and configuration. Therefore, Basler recommends calculating the frame period. To do so, check the value of the camera's ResultingFrameRateAbs parameter and take its reciprocal:

1 / resulting frame rate

This takes all influencing factors into account.

Possible Image Distortion (Rolling Shutter Effect)

If the object or the camera is moving very fast during image capture in Rolling shutter mode, image distortion may occur. This is also known as the rolling shutter effect.

This is due to the temporal shift between the start of exposure of the individual rows.

To prevent the rolling shutter effect, Basler recommends using flash lighting. Most cameras can supply a Flash Window output signal to facilitate the use of flash lighting.

Exposure Active Signal

If your camera model provides an Exposure Active output signal and the camera is configured for Rolling shutter mode, the Exposure Active signal goes high when the exposure time for the first row begins and goes low when the exposure time for the last row ends. This means that the signal width is greater than the exposure time.

Global Reset Release Shutter Mode

The Global Reset Release (GRR) shutter mode is a variant of the Rolling shutter mode. It combines the advantages of the Global and the Rolling shutter mode.

In GRR shutter mode, all of the pixels in the sensor start exposing at the same time. However, at the end of exposure, there is a temporal offset (tRow) from one row to the next.

The tRow values are the same as for the Rolling shutter mode and vary by camera model.

If the camera is operated in the GRR shutter mode, you must use flash lighting. Otherwise, the brightness in the acquired images will vary significantly from top to bottom due to the differences in the exposure times of the individual rows. Also, when you are capturing images of fast-moving objects, images can be distorted due to the temporal shift caused by the different exposure end times of the individual rows.

Most cameras can supply a Flash Window output signal to facilitate the use of flash lighting.

Other Factors Influencing the Frame Period

→ See Other Factors Influencing the Frame Period for Rolling shutter mode.

Additional Parameters

Depending on your camera model, the GlobalResetReleaseModeEnable parameter may also be available.

  • If you set the parameter to true, the camera sets the ShutterMode parameter to GlobalResetRelease and enables the Global Reset Release shutter mode.
  • If you set the parameter to false, the camera sets the ShutterMode parameter to Rolling and enables the Rolling shutter mode.

External Links

  • White Paper: Global Shutter, Rolling Shutter - Functionality and Characteristics of Two Exposure Methods (Basler)
  • Rolling Shutter (Wikipedia)
  • Simulation of the rolling shutter effect on a rotating propeller and a moving car (Wikipedia)

Specifics

Camera Model Available Shutter Modes Temporal Offset tRow[μs] Additional Parameters
acA2500-20gm Global - None
// Determine the current shutter mode
string shutterMode = camera.Parameters[PLCamera.ShutterMode].GetValue();
// Set the shutter mode to rolling
camera.Parameters[PLCamera.ShutterMode].SetValue(PLCamera.ShutterMode.Rolling);
// Set the shutter mode to global reset release
camera.Parameters[PLCamera.ShutterMode].SetValue(PLCamera.ShutterMode.GlobalResetRelease);

Stacked ROI

The Stacked ROI camera feature allows you to define multiple zones of varying heights and equal width on the sensor array that will be transmitted as a single image.

Only the pixel data from those zones will be transmitted. This increases the camera's frame rate.

The Stacked ROI feature is similar to the Stacked Zones Imaging feature, which is only available on ace classic cameras.

Prerequisites

  • The camera must be idle, i.e., not capturing images.
  • The Sequencer feature must be disabled.

How It Works

The Stacked ROI feature allows you to define vertically aligned zones of equal width on the sensor array. The maximum number of zones depends on your camera model.

When an image is acquired, only the pixel information from within the defined zones is read out of the sensor. The pixel information is then stacked together and transmitted as a single image.

The zones always have the same width and are vertically aligned. To configure the zones, Basler recommends the following procedure:

  1. Define a width and a horizontal offset that is valid for all zones.
  2. Define the heights and vertical offsets of the individual zones.

Configuring the ROI Zones

  1. Set the OffsetX parameter to the desired horizontal offset. The value is applied to all zones.
  2. Set the Width parameter to the desired zone width. The value is applied to all zones.
  3. Set the ROIZoneSelector parameter to the zone that you want to configure, e.g., Zone0.
  4. Set the ROIZoneOffset parameter to the desired vertical offset. The value is applied to the zone selected in step 3.
  5. Set the ROIZoneSize parameter to the desired zone height. The value is applied to the zone selected in step 3.
  6. Set the ROIZoneMode parameter to On to enable the zone.
  7. Repeat steps 3 to 6 for every zone you want to configure.

Considerations When Using the Stacked ROI Feature

  • You can enable the zones in any order you like. For example, you can enable zones 1, 3, and 5 and disable zones 0, 2, and 4.
  • You can place the zones freely around the sensor. For example, you can place zone 0 near the bottom, zone 2 near the top, and zone 1 in the middle. However, the camera always starts reading out and transmitting pixel data from the topmost zone on the sensor and then proceeds towards the bottom.
  • You can define vertically overlapping zones. If two zones overlap, they are transmitted as a single merged zone. The pixel data from the area of overlap is read out and transmitted only once.
  • When at least one zone has been defined, the following parameters become read-only:
    • OffsetY. The parameter is set to the vertical offset of the topmost zone.
    • Height. The parameter is set to the height of the final image, i.e., the sum of the heights of all zones.
    • CenterY.
    • All parameters related to the Sequencer feature.
  • If you have configured a zone and then enable binning, the position and the size of the zone are adapted automatically. The parameter values are divided by the corresponding binning factor and rounded down.
  • If you disable all zones after using the Stacked ROI feature, the size and position of the Image ROI is set to the size and position of the zone that was disabled last. For example, assume zones 0, 1, and 2 are enabled. Then, you disable the zones in the following order: 2, 1, 0. As a result, the size and position of the Image ROI is set to the size and position of the disabled zone 0.

Specifics

Camera Model Maximum Number of ROI Zones
acA2500-20gm 8
// Configure width and offset X for all zones
camera.Parameters[PLCamera.Width].SetValue(200);
camera.Parameters[PLCamera.OffsetX].SetValue(100);
// Select zone 0
camera.Parameters[PLCamera.ROIZoneSelector].SetValue(PLCamera.ROIZoneSelector.Zone0);
// Set the vertical offset for the selected zone
camera.Parameters[PLCamera.ROIZoneOffset].SetValue(100);
// Set the height for the selected zone
camera.Parameters[PLCamera.ROIZoneSize].SetValue(100);
// Enable the selected zone
camera.Parameters[PLCamera.ROIZoneMode].SetValue(PLCamera.ROIZoneMode.On);
// Select zone 1
camera.Parameters[PLCamera.ROIZoneSelector].SetValue(PLCamera.ROIZoneSelector.Zone1);
// Set the vertical offset for the selected zone
camera.Parameters[PLCamera.ROIZoneOffset].SetValue(250);
// Set the height for the selected zone
camera.Parameters[PLCamera.ROIZoneSize].SetValue(200);
// Enable the selected zone
camera.Parameters[PLCamera.ROIZoneMode].SetValue(PLCamera.ROIZoneMode.On);

Synchronous Free Run

The Synchronous Free Run camera feature allows you to capture images on multiple cameras at the same time and the same frame rate.

How It Works

If you are using multiple cameras in free run mode, image acquisition is slightly asynchronous for a variety of reasons, e.g., the camera's individual timings and delays.

The Synchronous Free Run feature allows you to synchronize cameras in free run mode. As a result, the cameras will acquire images at the same time and at the same frame rate.

Also, you can use the Synchronous Free Run feature to capture images with multiple cameras in precisely time-aligned intervals, i.e., in a chronological sequence. For example, you can configure one camera to start image acquisition at a specific point in time. Then you configure another camera to start 100 milliseconds after the first camera and a third camera to start 200 milliseconds after the first camera.

Also, you can configure the cameras to acquire images at the same time and the same frame rate, but with different exposure times.

Using Synchronous Free Run

General Use

To synchronize multiple cameras:

  1. Make sure that all cameras in your network are synchronized via the Precision Time Protocol feature.
  2. Open the connection to one of the cameras that you want to synchronize using Synchronous Free Run.
  3. Enable free run image acquisition on this camera.
  4. Enter the desired frame rate for the SyncFreeRunTimerTriggerRate parameter. 

    You must specify the same parameter value on all cameras. For example, to synchronize the cameras at 10 frames per second, you must set the parameter to 10 on all cameras.

  5. Set the SyncFreeRunTimerStartTimeHigh and the SyncFreeRunTimerStartTimeLow parameters to 0.
  6. Execute the SyncFreeRunTimerUpdate command.
  7. Set the SyncFreeRunTimerEnable parameter to true.
  8. Repeat steps 2 to 7 for all cameras.

Synchronous Free Run With Time-Aligned Intervals

To synchronize multiple cameras with time-aligned intervals, i.e., in a chronological sequence:

The following steps must be performed using the pylon API.

  1. Make sure that all cameras in your network are synchronized via the Precision Time Protocol feature.
  2. Open the connection to the first camera in the chronological sequence.
  3. Enable free run image acquisition on the camera.
  4. Enter the desired frame rate for the SyncFreeRunTimerTriggerRate parameter. 

    You must specify the same parameter value on all synchronized cameras. For example, if you want the cameras to acquire 10 frames per second, you must set the parameter to 10 on all cameras.

  5. Determine the start time of the first camera:
    1. Execute the GevTimestampControlLatch command on the first camera.

      A "snapshot" of the camera’s current timestamp value is taken.

    2. Get the value of the GevTimestampValue parameter on the same camera.

      The value is specified in ticks. On Basler cameras with the Precision Time Protocol feature enabled, one tick equals one nanosecond.

    3. Add a start delay in ticks (= nanoseconds) to the value determined in step 5.2.

      For example, to specify a start delay of 1 second, add 1 000 000 000 to the value determined in step 5.2.

      The delay is required because the first camera must wait until the other cameras have been configured properly.

  6. Convert the value determined in step 5 to start time high and start time low values and set the SyncFreeRunTimerStartTimeHigh and the SyncFreeRunTimerStartTimeLow parameters accordingly.
  7. Execute the SyncFreeRunTimerUpdate command.
  8. Set the SyncFreeRunTimerEnable parameter to true.
  9. Open the connection to the next camera in the chronological sequence.
  10. Enable free run image acquisition on this camera.
  11. Enter the desired frame rate for the SyncFreeRunTimerTriggerRate parameter. 

    You must specify the same parameter value on all synchronized cameras. For example, if you want the cameras to acquire 10 frames per second, set the parameter to 10 on all cameras.

  12. Add the desired interval (in nanoseconds) to the start time of the first camera (determined in step 5).

    For example, if you want the camera to start image acquisition 100 milliseconds after the first camera, add 100 000 000 to the value determined in step 5.

  13. Convert the value determined in step 12 to start time high and start time low values and configure the SyncFreeRunTimerStartTimeHigh and the SyncFreeRunTimerStartTimeLow parameters accordingly.
  14. Execute the SyncFreeRunTimerUpdate command.
  15. Set the SyncFreeRunTimerEnable parameter to true.
  16. Repeat steps 9 to 15 for all remaining cameras.

Converting the 64-bit Timestamp to Start Time High and Start Time Low

The start time for the Synchronous Free Run feature must be specified as a 64-bit GigE Vision timestamp value (in nanoseconds), split into two 32-bit values.

The high part of the 64-bit value must be transmitted using the SyncFreeRunTimerStartTimeHigh parameter.

The low part of the 64-bit value must be transmitted using the SyncFreeRunTimerStartTimeLow parameter.

Example: Assume your network devices are synchronized to UTC and you want to configure Fri Dec 12 2025 11:00:00 UTC as the start time. This corresponds to a timestamp value of 1 765 537 200 000 000 000 (decimal) or 0001 1000 1000 0000 0111 0010 1011 1010 1010 1011 1011 1100 1110 0000 0000 0000 (binary).

The high part of this value (the most significant 32 bits) is 411 071 162; the low part (the least significant 32 bits) is 2 881 282 048.

Therefore, to configure a start time of Fri Dec 12 2025 11:00:00 UTC, you must set the SyncFreeRunTimerStartTimeHigh parameter to 411 071 162 and the SyncFreeRunTimerStartTimeLow parameter to 2 881 282 048.

// Example: Configuring cameras for synchronous free run.
// It is assumed that the "cameras" object is an 
// instance of CBaslerGigEInstantCameraArray.
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
    // Open the camera connection
    cameras[i].Open();
    // Make sure the Frame Start trigger is set to Off to enable free run
    cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
    cameras[i].TriggerMode.SetValue(TriggerMode_Off);
    // Let the free run start immediately without a specific start time
    cameras[i].SyncFreeRunTimerStartTimeLow.SetValue(0);
    cameras[i].SyncFreeRunTimerStartTimeHigh.SetValue(0);
    // Specify a trigger rate of 30 frames per second
    cameras[i].SyncFreeRunTimerTriggerRateAbs.SetValue(30.0);
    // Apply the changes
    cameras[i].SyncFreeRunTimerUpdate.Execute();
    // Enable Synchronous Free Run
    cameras[i].SyncFreeRunTimerEnable.SetValue(true);
}

Temperature State

The Temperature State camera feature indicates whether the camera's internal temperature is normal or too high.

When the temperature is too high, the camera operates in over temperature mode and immediate cooling is required.

How It Works

Information about the internal temperature is provided by two parameters:

  • The DeviceTemperature parameter value shows the current core board temperature.
  • The TemperatureState parameter value tells you the camera's current internal temperature state:
    • Ok: The device temperature is within the normal operating temperature range.
    • Critical: The device temperature is close to or at the allowed maximum. Provide cooling. The camera operates in over temperature mode.
    • Error: The device temperature is above the allowed maximum. Provide cooling immediately. The camera operates in over temperature mode.

Over Temperature Mode

When the TemperatureState parameter value is Critical or Error, the camera operates in over temperature mode. This mode provides a set of mechanisms that alert the user and help to protect the camera.

The mechanisms take effect at different device temperatures, depending on the alert level and on whether the camera is heating up (heating path) or cooling down (cooling path).

Normal camera operation requires that the temperature state stays at Ok and the housing temperature stays within the allowed range. To ensure this, follow the guidelines set out in the Environmental Requirements section of your camera model's topic.

At elevated temperatures, the camera may be damaged, the camera's lifetime is shortened, and image quality can degrade. The lifetime is also shortened by frequent high-temperature incidents.

Heating Path in Over Temperature Mode

Critical Temperature Level

When the device temperature reaches the critical temperature threshold, the camera is close to becoming too hot.

In this situation, the following happens:

  • The TemperatureState parameter value changes from Ok to Critical.
  • The camera sends a CriticalTemperature event.

Another CriticalTemperature event can only be sent after the device temperature has fallen to at least 4 °C below the critical temperature threshold.

Over Temperature Level

When the device temperature reaches the over temperature threshold, the camera is too hot. The camera must be cooled immediately. Otherwise, the camera may be damaged irreversibly.

In this situation, the following happens:

  • The camera reduces its current draw.
  • Image acquisition stops and test image 2 appears instead.
  • The TemperatureState parameter value changes from Ok to Error.
  • The camera sends an OverTemperature event.
  • If the Error Code feature is available on your camera model, the camera reports an over temperature error code.
  • Powering down the camera is meant to protect the camera by allowing it to cool. However, if the surrounding temperature is sufficiently high, the camera's internal temperature will stay high regardless or even increase further. Therefore, you should also do the following:
    • Take immediate action to improve heat dissipation in order to quickly leave the Over Temperature state.
    • Provide more efficient heat dissipation to ensure that the camera never returns to the Over Temperature state.
  • Another OverTemperature event can only be sent after the device temperature has fallen to at least 4 °C below the over temperature threshold.

Cooling Path in Over Temperature Mode

Over Temperature Level

When the device temperature falls below the over temperature threshold, the following happens:

  • The TemperatureState parameter value changes from Error to Critical.

When the device temperature falls to 4 °C below the over temperature threshold, the following happens:

  • Test image 2 disappears.
  • Image acquisition resumes with the same settings and features as before the camera entered the Error state. The exception is the Sequencer feature, which you have to re-enable manually.

When the device temperature falls below the critical temperature threshold, the following happens:

  • The TemperatureState parameter value changes to Ok.

The camera's temperature state and internal temperature are normal and therefore allow normal camera operation.

Determining the Temperature State

  1. Get the TemperatureState parameter value.
  2. If the parameter value is Critical or Error, the camera operates in over temperature mode, and you must cool the camera until the parameter value is Ok.

To make full use of the Temperature State feature:

  • Get the DeviceTemperature parameter value to determine the exact core board temperature.
  • Enable the Event Notification feature to receive events whenever the camera gets too hot.
  • If the Error Code feature is available on your camera model, read the LastError parameter value to determine whether the camera is in over temperature mode.

Additional Parameters

The camera also provides a TemperatureSelector parameter. This allows you to choose the location within the device where the temperature is measured.

On Basler cameras, the parameter is preset to Coreboard and can't be changed.

Specifics

Camera Model Critical Temperature Threshold Over Temperature Threshold
acA2500-20gm 72 °C (161.6 °F) 78 °C (172.4 °F)
// Get the current temperature state parameter value.
string e = camera.Parameters[PLCamera.TemperatureState].GetValue();
// Get the current device temperature parameter value.
double d = camera.Parameters[PLCamera.DeviceTemperature].GetValue();

Test Images

The Test Images camera feature allows you to check the camera's basic functionality and its ability to transmit images.

Test images can be used for maintenance purposes and failure diagnostics. They are generated by the camera itself. Therefore, neither the optics nor the imaging sensor of the camera is involved in their creation.

Displaying Test Images

  1. Select a test image by setting the TestImageSelector parameter to one of the following values:
    • Testimage1
    • Testimage2
    • Testimage3
    • Testimage4
    • Testimage5
    • Testimage6 (if available)
  2. Acquire at least one image to display the selected test image. If you want to display the test image in the pylon Viewer, click the single or continuous shot button in the toolbar.

Available Test Images

Depending on your camera model, the following test images are available:

// Select test image 1
camera.Parameters[PLCamera.TestImageSelector].SetValue(PLCamera.TestImageSelector.Testimage1);
// Acquire images to display the selected test image
// ...
// (Insert your own image grabbing routine here.
// For example, the InstantCamera class provides the StartGrabbing method.)

Timer

The Timer camera feature allows you to configure a timer output signal that goes high on specific camera events and goes low after a specific duration.

This is how the timer works:

  • A trigger source event that starts the internal timer occurs.
  • A delay begins to expire.
  • When the delay has expired, the timer output signal goes high and stays high for the duration that you have configured.
  • When the signal's duration has expired, the timer output signal goes low.

Configuring the Timer

  1. Set the LineSelector parameter to the output line that you want to use for the timer signal. If the line is a GPIO line, the line must be configured as output.
  2. Set the LineSource parameter to TimerActive.
  3. Set the TimerTriggerSource parameter to one of the available trigger source events:
    • ExposureStart: The timer starts when the exposure starts.
    • Flashwindowstart: The timer starts when the flash window opens.
  4. Set the TimerDurationAbs parameter to the desired timer duration in microseconds.
  5. Set the TimerDelayAbs parameter to the desired timer delay in microseconds.

On some camera models, you may have to increase the maximum timer duration and timer delay values.

Increasing the Maximum Timer Duration and Delay

On some camera models, the TimerDurationAbs and TimerDelayAbs parameters are limited to a default maximum value of 4 095.

To increase the maximum timer duration on these models:

  1. Divide the desired timer duration by 4 095 and round up the result to the nearest integer. Example: Assume you want to set a timer duration of 50 000 µs. 50000 / 4095 = 12.21 ≈ 13.
  2. Set the TimerDurationTimebaseAbs parameter to the value determined in step 1, in this case 13.
  3. Set the TimerDurationAbs parameter to the desired timer duration, in this case 50 000.

To increase the maximum timer delay on these models:

  1. Divide the desired timer delay by 4 095 and round up the result to the nearest integer. Example: Assume you want to set a timer delay of 6 000 µs. 6000 / 4095 = 1.47 ≈ 2.
  2. Set the TimerDelayTimebaseAbs parameter to the value determined in step 1, in this case 2.
  3. Set the TimerDelayAbs parameter to the desired timer delay, in this case 6 000.

Depending on the TimerDurationTimebaseAbs and TimerDelayTimebaseAbs parameter values, the camera may not be able to achieve the exact timer duration and delay desired.

For example, if you set the TimerDurationTimebaseAbs parameter to 13, the camera can only achieve timer durations that are a multiple of 13. Therefore, if you set the TimerDurationAbs parameter to 50 000 and the TimerDurationTimebaseAbs parameter to 13, the camera will automatically change the setting to the nearest possible value (e.g., 49 998, which is the nearest multiple of 13).

Additional Parameters

Depending on your camera model, the following additional parameters are available:

  • TimerDurationTimebaseAbs: Allows you to increase the maximum timer duration.
  • TimerDurationRaw: Used internally to calculate the timer duration. You don't need to configure this parameter. If the parameter is available, the camera calculates the timer duration as follows: TimerDurationRaw x TimerDurationTimebaseAbs = TimerDurationAbs.
  • TimerDelayTimebaseAbs: Allows you to increase the maximum timer delay.
  • TimerDelayRaw: Used internally to calculate the timer delay. You don't need to configure this parameter. If the parameter is available, the camera calculates the timer delay as follows: TimerDelayRaw x TimerDelayTimebaseAbs = TimerDelayAbs.
  • TimerSelector: Sets which timer to configure. Because Basler cameras support only one timer, this parameter is preset and can't be changed.

Specifics

Camera Model Default Maximum Value for Timer Duration and Delay Available Trigger Source Events Additional Parameters
acA2500-20gm 16 777 215 Exposure Start TimerSelector
// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Specify that the timer signal is output on Line 2
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.TimerActive);
// Specify that the timer starts when exposure starts
camera.Parameters[PLCamera.TimerTriggerSource].SetValue(PLCamera.TimerTriggerSource.ExposureStart);
// Set the timer duration to 1000 microseconds
camera.Parameters[PLCamera.TimerDurationAbs].SetValue(1000);
// Set the timer delay to 500 microseconds
camera.Parameters[PLCamera.TimerDelayAbs].SetValue(500);

Timestamp

The Timestamp camera feature counts the number of ticks generated by the camera's internal device clock.

The timestamp value is used by several camera features, e.g., Chunk Features and Event Notification.

How It Works

As soon as the camera is powered on, it starts generating and counting clock ticks. The counter is reset to 0 whenever the camera is powered off and on again. On some camera models, you can also reset the counter during camera operation.

The number of ticks per second, i.e., the tick frequency, depends on your camera model.

The timestamp counter is also used to synchronize multiple cameras via PTP. On cameras synchronized via PTP, the timestamp value will be (nearly) identical.

Determining the Current Timestamp Value

To determine the current value of the timestamp counter:

  1. Execute the GevTimestampControlLatch command.

    A "snapshot" of the camera’s current timestamp value is taken.

  2. Get the value of the GevTimestampValue parameter.

    The value of the parameter refers to the point in time when the GevTimestampControlLatch command was executed.

There is an unspecified and variable delay between sending the GevTimestampControlLatch command and it becoming effective.

Specifics

Camera Model: All ace GigE camera models
Timestamp Tick Frequency: 125 MHz (= 125 000 000 ticks per second, 1 tick = 8 ns) or 1 GHz (= 1 000 000 000 ticks per second, 1 tick = 1 ns) (a)
Counter Can Be Reset during Camera Operation: Yes. To reset the counter, make sure that PTP (if available) is disabled and execute the GevTimestampControlReset command.


(a) Depends on the camera configuration, e.g., on whether PTP is enabled or not. To determine the current tick frequency, get the value of the GevTimestampTickFrequency parameter.

// Take a "snapshot" of the camera's current timestamp value
camera.Parameters[PLCamera.GevTimestampControlLatch].Execute();
// Get the timestamp value
Int64 i = camera.Parameters[PLCamera.GevTimestampValue].GetValue();
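Because the tick frequency depends on the camera configuration (see the note above), a timestamp should be converted to seconds using the GevTimestampTickFrequency parameter rather than a hard-coded frequency. A minimal sketch, assuming the GevTimestampTickFrequency parameter is available as on ace GigE cameras:

```csharp
// Take a "snapshot" of the camera's current timestamp value
camera.Parameters[PLCamera.GevTimestampControlLatch].Execute();
// Get the timestamp value (in ticks)
Int64 ticks = camera.Parameters[PLCamera.GevTimestampValue].GetValue();
// Get the current tick frequency (ticks per second)
Int64 tickFrequency = camera.Parameters[PLCamera.GevTimestampTickFrequency].GetValue();
// Convert the timestamp to seconds since power-on (or since the last counter reset)
double seconds = (double)ticks / tickFrequency;
```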

Trigger Activation

The Trigger Activation camera feature allows you to specify the signal transition (rising or falling edge) that activates the trigger type selected.

This feature is only available with hardware triggering.

To set the trigger activation mode:

  1. Set the TriggerActivation parameter to one of the following values:
    • RisingEdge: The trigger becomes active when the trigger signal rises, i.e., when the signal status changes from low to high.
    • FallingEdge: The trigger becomes active when the trigger signal falls,  i.e., when the signal status changes from high to low.
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the trigger activation mode to rising edge
camera.Parameters[PLCamera.TriggerActivation].SetValue(PLCamera.TriggerActivation.RisingEdge);

Trigger Delay

To add a trigger delay:

  1. Set the TriggerSelector parameter to the desired trigger type, e.g., FrameStart.
  2. Set the TriggerDelayAbs parameter to the desired delay (in µs).

    The minimum value is 0 μs (no delay). The maximum value is 1,000,000 μs (1 s).

// Select the frame start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the delay for the frame start trigger to 300 µs
camera.Parameters[PLCamera.TriggerDelayAbs].SetValue(300);

Trigger Mode

To set the trigger mode:

  1. Set the TriggerSelector parameter to the desired trigger type, e.g., FrameStart.
  2. Set the TriggerMode parameter to one of the following values:
    • On: Enables triggered image acquisition for the trigger type selected.
    • Off: Disables triggered image acquisition for the trigger type selected. Trigger signals are generated automatically by the camera.

By default, the trigger mode is set to Off for all trigger types. This means that free run image acquisition is enabled.
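Accordingly, to return a triggered camera to free run image acquisition, set the trigger mode back to Off for the trigger type in question:

```csharp
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Disable triggered image acquisition for the Frame Start trigger
// (free run image acquisition is enabled again)
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.Off);
```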

Immediate Trigger Mode

On some camera models, the Immediate Trigger Mode is available.

When the Immediate Trigger Mode is enabled, exposure starts immediately after triggering, but changes to image parameters become effective with a short delay, i.e., after one or more images have been acquired. This is useful if you want to minimize the exposure start delay, i.e., if you want to start image acquisition as soon as possible, and if your imaging conditions are stable.

To enable the Immediate Trigger Mode, set the BslImmediateTriggerMode parameter to On.

The setting takes effect whenever the TriggerMode parameter is set to On.

External links

  • Trigger functions (Vision Doctor)

Specifics

Camera Model: All ace GigE camera models
Immediate Trigger Mode: Not available
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Enable triggered image acquisition for the Frame Start trigger
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);

Trigger Selector

Selecting a Trigger Type

To select a trigger type, set the TriggerSelector parameter to one of the following values:

  • FrameStart
  • AcquisitionStart (if available)

Once you have selected a trigger type, you can do the following:

  • Trigger Mode: Enable or disable triggered image acquisition for the trigger type selected.
  • Trigger Source: Enable hardware or software triggering by selecting the input line or software command that acts as the source for the trigger type selected.
  • Trigger Activation: Select the signal transition (rising or falling edge) necessary for activating the trigger type selected.
  • Trigger Delay: Configure a delay between the receipt of a hardware trigger signal and the moment when the trigger type selected becomes effective.

Available Trigger Types

Frame Start Trigger

The Frame Start trigger is used to start the acquisition of a single image. Every time the camera receives a Frame Start trigger signal, the camera starts the acquisition of exactly one image.

In free run acquisition mode, which is enabled by default, Frame Start trigger signals are generated automatically by the camera.

This is the most commonly used trigger type. In most imaging applications, you will only need to configure the Frame Start trigger.

Frame Burst Start Trigger (= Acquisition Start Trigger)

If available, you can use the Frame Burst Start trigger to start the acquisition of a series of images (a "burst" of images). Every time the camera successfully receives a Frame Burst Start trigger signal, the camera starts the acquisition of a series of images. The number of images per series is specified by the AcquisitionFrameCount parameter. 

Using the Frame Burst Start Trigger

Use Case 1: Frame Burst Start Trigger On, Frame Start Trigger Off

One way to use the Frame Burst Start trigger is to enable the Frame Burst Start trigger and to disable the Frame Start trigger.

This way, every time the camera successfully receives a Frame Burst Start trigger signal, the camera automatically acquires a complete series of images. The number of images per series is specified by the AcquisitionFrameCount parameter.

For example, if the AcquisitionFrameCount parameter is set to 3, the camera automatically acquires 3 images.

Afterwards, the camera waits for the next Frame Burst Start trigger signal. On the next trigger signal, the camera acquires another 3 images, and so on.

Use Case 2: Frame Burst Start Trigger On, Frame Start Trigger On

Another way to use the Frame Burst Start trigger is to enable both the Frame Burst Start trigger and the Frame Start trigger.

This way, every time the camera successfully receives a Frame Burst Start trigger signal, the camera does not  automatically acquire images. Instead, the camera waits for Frame Start trigger signals. You can now apply Frame Start trigger signals to acquire all images of the series one by one. For example, if the AcquisitionFrameCount parameter is set to 3, you can apply 3 Frame Start trigger signals one after the other.

When the number of images per series (e.g., 3 images) has been reached, the camera ignores all further Frame Start trigger signals. You must apply a new Frame Burst Start trigger signal to start the next series of images.

If you want to trigger both trigger types via hardware signals, you must assign different hardware trigger sources to the Frame Burst Start Trigger and the Frame Start Trigger, e.g., Line1 and Line3.
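Use case 2 with hardware triggering can be sketched as follows (assuming Line 1 and Line 3 are available as hardware trigger sources, as on the acA2500-20gm, where the Frame Burst Start trigger is named Acquisition Start):

```csharp
// Select the Acquisition Start trigger, enable it, and set its source to Line 3
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.AcquisitionStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line3);
// Select the Frame Start trigger, enable it, and set its source to Line 1
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);
// Allow 3 Frame Start trigger signals per Acquisition Start trigger signal
camera.Parameters[PLCamera.AcquisitionFrameCount].SetValue(3);
```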

Specifics

Camera Model: acA2500-20gm

Available Trigger Types:
  • Frame Start
  • Acquisition Start

Maximum Number of Images per Series: 255
// Select and enable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Select and enable the Acquisition Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.AcquisitionStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the number of images to be acquired per Acquisition Start trigger signal to 3
camera.Parameters[PLCamera.AcquisitionFrameCount].SetValue(3);

Trigger Software

To trigger the camera by executing a software command:

  1. Set the TriggerSelector parameter to the desired trigger type, e.g., FrameStart.
  2. Set the TriggerMode parameter to On.
  3. Set the TriggerSource parameter to Software.
  4. Execute the TriggerSoftware command.
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Enable triggered image acquisition for the Frame Start trigger
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the trigger source for the Frame Start trigger to Software
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Software);
// Generate a software trigger signal
camera.Parameters[PLCamera.TriggerSoftware].Execute();

Trigger Source

Configuring a Hardware Trigger Source

If a hardware trigger source is available on your camera model, you can set it as the source for a trigger. To do so:

  1. Set the TriggerSelector parameter to the desired trigger type, e.g., FrameStart.
  2. Set the TriggerSource parameter to one of the following values:
    • Line1, Line2, Line3, Line4: If available, the trigger selected can be triggered by applying an electrical signal to I/O line 1, 2, 3, or 4.

    If the I/O line is a GPIO line, the line must be configured for input.

Configuring a Software Trigger Source

  1. Set the TriggerSelector parameter to the desired trigger type, e.g., FrameStart.
  2. Set the TriggerSource parameter to one of the following values:
  • Software: The trigger selected can be triggered by executing a TriggerSoftware command.
  • SoftwareSignal1, SoftwareSignal2, SoftwareSignal3: If available, the trigger selected can be triggered using the Software Signal Pulse feature.
  • Action1: If available, the trigger selected can be triggered using the Action Commands feature.

Specifics

Camera Model: acA2500-20gm

Available Hardware Trigger Sources:
  • Line 1
  • Line 3

Available Software Trigger Sources:
  • Software
  • Action 1
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the trigger source to Line 1
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);

User-Defined Values

The User-Defined Values camera feature allows you to store user-defined values in the camera.

How It Works

The camera can store up to five user-defined values (named Value1 to Value5). These can be values that you may require for your application (e.g., optical parameter values for panoramic images). The values are 32-bit signed integer values that you can set and get as desired. They serve as storage locations only and have no impact on the operation of the camera.

Configuring User-Defined Values

  1. Set the UserDefinedValueSelector parameter to the desired user-defined value (Value1 to Value5).
  2. Enter the desired value for the UserDefinedValue parameter.
// Select user-defined value 1
camera.Parameters[PLCamera.UserDefinedValueSelector].SetValue(PLCamera.UserDefinedValueSelector.Value1);
// Set user-defined value 1 to 1000
camera.Parameters[PLCamera.UserDefinedValue].SetValue(1000);
// Get the value of user-defined value 1
camera.Parameters[PLCamera.UserDefinedValueSelector].SetValue(PLCamera.UserDefinedValueSelector.Value1);
Int64 UserValue1 = camera.Parameters[PLCamera.UserDefinedValue].GetValue();

User Output Value

The User Output Value camera feature allows you to set the status of an output line to high (1) or low (0) by software.

This can be useful to control external events or devices, e.g., a light source.

Prerequisites

The line source of the desired output line must be set to a User Output signal.

Setting the Output Line Status

How to set the output line status depends on how many User Output line sources are available on your camera model.

One User Output line source is available ("User Output"):

  1. If you want to set the line status to high (1), set the UserOutputValue parameter to true.
  2. If you want to set the line status to low (0), set the UserOutputValue parameter to false.
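On camera models with a single User Output line source, no selector is needed. A minimal sketch, assuming the line source of the output line has already been set to the User Output signal:

```csharp
// Set the status of the output line to high (1)
camera.Parameters[PLCamera.UserOutputValue].SetValue(true);
// Set the status of the output line back to low (0)
camera.Parameters[PLCamera.UserOutputValue].SetValue(false);
```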

Multiple User Output line sources are available (e.g., "User Output 1", "User Output 2"):

  1. Set the UserOutputSelector parameter to the corresponding line source.

    Example: Assume that you have set the line source of Line 2 to UserOutput1. To configure the line status of Line 2, you must set the UserOutputSelector parameter to UserOutput1.

  2. If you want to set the line status to high (1), set the UserOutputValue parameter to true.
  3. If you want to set the line status to low (0), set the UserOutputValue parameter to false.

 

// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Set the source signal to User Output 1
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.UserOutput1);
// Select the User Output 1 signal
camera.Parameters[PLCamera.UserOutputSelector].SetValue(PLCamera.UserOutputSelector.UserOutput1);
// Set the User Output Value for the User Output 1 signal to true.
// Because User Output 1 is set as the source signal for Line 2,
// the status of Line 2 is set to high.
camera.Parameters[PLCamera.UserOutputValue].SetValue(true);

User Output Value All

The User Output Value All camera feature allows you to configure the status of all output lines in a single operation.

This can be useful to control external events or devices, e.g., a light source.

Configuring the Status of All Output Lines

You can configure the status of all output lines with the UserOutputValueAll parameter. The parameter holds a 64-bit integer value.

Certain bits in the value are associated with the output lines. Each bit configures the status of its associated line:

  • If a bit is set to 0, the status of the associated line is set to low.
  • If a bit is set to 1, the status of the associated line is set to high.

Specifics

Camera Model: acA2500-20gm

Bit-to-Line Association:
  • Bit 0 is always 0
  • Bit 1 configures the status of Line 2
  • Bit 2 configures the status of Line 3

Example: Setting all output lines to high = 0b110 (binary) = 6 (decimal)
// Set the status of all output values in a single operation
// Assume the camera has two output lines and you want to set both to high
// 0b110 (binary) = 6 (decimal)
camera.Parameters[PLCamera.UserOutputValueAll].SetValue(6);
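Instead of writing the binary literal by hand, you can also build the value from the bit-to-line association by shifting; a sketch for the acA2500-20gm association above:

```csharp
// Bit 1 configures Line 2, bit 2 configures Line 3
Int64 allLinesHigh = (1 << 1) | (1 << 2);  // 0b110 = 6
// Set the status of both output lines to high in a single operation
camera.Parameters[PLCamera.UserOutputValueAll].SetValue(allLinesHigh);
```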

User Sets

The User Sets camera feature allows you to save or load camera settings. You can also specify which settings will be loaded at camera startup.

A user set (also called "configuration set") is a group of parameter values. It contains all parameter settings needed to control the camera, with a few exceptions.

Some user sets are preset and read-only. These user sets are also called "factory sets".

What's in a User Set?

Each user set includes the values of all camera parameters, with the following exceptions:

  • Parameter values related to the following camera features are not included:
    • Action Commands
    • LUT
    • Sequencer
    • User-Defined Values
    • Precision Time Protocol
  • Values of parameters that include the term "Selector" in their names are not included (e.g., GainSelector). Exceptions: TestImageSelector, GammaSelector, LightSourceSelector.
  • The value of the DeviceUserID parameter is not included.
  • The value of the GevGVSPExtendedIDMode parameter is not included.
  • Several other parameters related to the transport layer are not included.

This means that when you load or save a user set, the values of all camera parameters will be loaded or saved, except for the parameters listed above.

Loading a User Set

  • When a set is loaded, it overwrites the current camera settings.
  • Loading a user set is only possible when the camera is idle, i.e., not acquiring images.
  1. Set the UserSetSelector parameter to one of the available user sets, e.g., UserSet1.
  2. Execute the UserSetLoad command.

Saving a User Set

  • Only the UserSet1, UserSet2, and UserSet3 user sets can be saved. The other user sets are read-only.
  • Saving a user set is only possible when the camera is idle, i.e., not acquiring images.
  1. Set the UserSetSelector parameter to one of the available user sets, e.g., UserSet1.
  2. Execute the UserSetSave command.

Designating the Startup Set

Designating a startup set is only possible when the camera is idle, i.e., not acquiring images.

The user set that you designate as the startup set will be loaded whenever the camera is powered on.

To designate the startup set, set the UserSetDefaultSelector parameter to one of the available user sets, e.g., UserSet1.

Available User Sets

The Default user set is a read-only factory set.

Loading this set configures the camera to provide good camera performance in many common applications and under average conditions. The Default user set contains the initial parameter values that the camera is shipped with, i.e., the factory default settings.

The HighGain user set is a read-only factory set.

Loading this set increases the gain by 6 dB.

The HighGain user set contains the same parameter values as the Default user set, with the following exceptions:

  • If available, the Gain parameter is set to a value that increases the gain by 6 dB compared to the Default user set. The actual parameter value varies by camera model.
  • If available, the GainRaw parameter is set to a value that increases the gain by 6 dB compared to the Default user set. The actual parameter value varies by camera model.

The AutoFunctions user set is a read-only factory set.

Loading this user set enables the camera's Exposure Auto and Gain Auto auto functions.

The AutoFunctions user set contains the same parameter values as the Default user set, with the following exceptions:

  • The GainAuto parameter is set to Continuous.
  • The ExposureAuto parameter is set to Continuous.
  • The AutoFunctionProfile parameter is set to GainMinimum.

User Set 1, User Set 2, and User Set 3

You can use the UserSet1, UserSet2, and UserSet3 user sets to load and save your own camera settings.

By default, these user sets contain the same parameter values as the Default user set. However, you can overwrite the values with your own settings.

Specifics

Camera Model: acA2500-20gm

Available User Sets:
  • User Set 1
  • User Set 2
  • User Set 3
  • Default
  • High Gain
  • Auto Functions
// Load the High Gain user set
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.HighGain);
camera.Parameters[PLCamera.UserSetLoad].Execute();
// Load the User Set 1 user set
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.UserSet1);
camera.Parameters[PLCamera.UserSetLoad].Execute();
// Adjust some camera settings
camera.Parameters[PLCamera.Width].SetValue(600);
camera.Parameters[PLCamera.Height].SetValue(400);
camera.Parameters[PLCamera.ExposureTimeAbs].SetValue(3500.0);
// Save the settings in User Set 1
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.UserSet1);
camera.Parameters[PLCamera.UserSetSave].Execute();
// Designate User Set 1 as the startup set
camera.Parameters[PLCamera.UserSetDefaultSelector].SetValue(PLCamera.UserSetDefaultSelector.UserSet1);
