The Vision Sensor gives a robot the ability to detect and track colored objects. The steps below cover configuring the sensor and writing code that uses it.
Vision Sensor support was added to VEX IQ with firmware version 2.1.1 in November 2018, so be sure to check whether your firmware needs to be updated; older firmware will not recognize the Vision Sensor. VEX IQ firmware can be found on the VEX website.
Before the camera can recognize colors, it must be taught which colors to look for. First, choose which of the brain's ports the camera will connect to, then click the small gear icon opposite that port number. A VEX IQ device list should appear:
Select the Vision Sensor from the list, but don't close the device list yet.
To configure the camera, we're going to need the horsepower of a big computer so we can see what the camera sees in real time. The IQ Brain isn't fast enough to transfer live video back to our monitor, so we're going to plug the camera into the computer via USB. You can use the same micro USB cord used for the IQ Brain, and you don't have to unplug the IQ Smart Cable from the Vision Sensor. Once the Vision Sensor is plugged into the computer, click on the "Configure Vision Sensor..." button at the bottom left of the device list window. The Vision Sensor config window should appear:
To set a signature, start by pointing the Vision Sensor at a sample of the target color. This is best done in the same lighting that the robot will be operating in. If there is nowhere convenient to set the camera down while you work, you can click the Freeze button at the lower right of the image pane to hold a frame stationary. Once you have a stable image to work with, click and drag on the image to draw a red rectangle around a swatch of the desired color:
If you selected a discernible chunk of color to build a signature from, the "Set" buttons will turn green. Click one of them to save a raw signature based on the selected region to the corresponding signature slot. After doing so, the red box will be cleared, the Set buttons will turn blue again, and your new color signature will be highlighted in the frame wherever it appears, along with some basic information on each identifiable blob of that color the camera could find:
After a signature is saved, you can give it a name and adjust how selective the camera should be. To set a name, click on the existing name (s1-7 by default), highlight the old name, and type a new one. Press Enter or click outside the naming box to save it. To set the tolerance, click on the bidirectional arrow (↔) to the right of the signature. This opens another box with a tolerance slider in it. Drag the slider to the right to make the Vision Sensor more tolerant, that is, to accept more colors similar to the raw color. Drag the slider to the left to make it less tolerant and more likely to reject colors that are close-to-but-not-quite matching. You will be able to see what the camera considers matching or non-matching in the video feed (or frozen video frame) on the left side. The difference between tolerant and intolerant settings is shown in the following two screen grabs. For comparison, the default tolerance is 3.0.
If you can't get a signature to be reliable by adjusting image area and tolerance selection, you might have to adjust the image brightness. Image brightness affects everything the camera sees and not just one signature or code, so adjust it sparingly if you are trying to configure multiple signatures! Image brightness is configured with the bidirectional arrow (↔) located to the right of the Brightness label.
The Vision Sensor can also look for codes built from multiple color signatures. To define a code, at least two signatures must already be defined. Once you have them, switch to the Codes tab, click one of the white "Enter Code..." fields, and enter your code. Codes are of the format #,#[,#[,#[,#]]], where each # is the ID of a signature from 1-7; a signature's ID is based on its position in the list of signatures. A code must contain at least two signatures and can contain up to five. Signatures can repeat within a code, but cannot appear next to each other, so 1,2,1 is valid but 1,1,2 is not. After a code is defined, you can give it a name in the gray box, and the code will be highlighted in the video feed whenever the camera identifies it. Note that a signature recognized as part of a code will not also show up as a distinct signature object, so use codes wisely.
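The format rules for codes can be sketched as a small standalone checker. This is not part of the VEX API, just a hypothetical helper written to illustrate the constraints (two to five IDs, each from 1-7, with no ID repeated back-to-back):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper (not part of the VEX API): checks whether a
// code string like "1,2,1" follows the rules above: 2-5 signature
// IDs, each 1-7, with no ID appearing twice in a row.
bool isValidCode(const std::string &code) {
    std::vector<int> ids;
    std::stringstream ss(code);
    std::string part;
    while (std::getline(ss, part, ',')) {
        int id = std::stoi(part);
        if (id < 1 || id > 7) return false;    // IDs come from slots 1-7
        if (!ids.empty() && ids.back() == id)  // no adjacent repeats
            return false;
        ids.push_back(id);
    }
    return ids.size() >= 2 && ids.size() <= 5; // codes hold 2-5 signatures
}
```

With this sketch, isValidCode("1,2,1") is true while isValidCode("1,1,2") is false, matching the rule above.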
When you are done configuring the camera, remember to unplug the USB cable from it. If it is also your programming cable, be sure to plug it back into your IQ Brain before attempting to download code.
Once the camera is configured, it's time for the more interesting part: building code to use it.
The bread and butter of vision sensor use is the takeSnapshot method. The most common way of using this method involves providing it with the name of the code or signature that you're interested in:
vision_3.takeSnapshot(sig_REDCUBE);
The takeSnapshot method returns the number of objects seen, from 0 to 4; it will never report more than 4, as that is the most the I2C communication channel is fast enough to relay. This value can be saved in a variable or used directly in a flow-control statement:
int numseen = vision_3.takeSnapshot(sig_TARGETBLOB);
if (vision_3.takeSnapshot(sig_PURPLEWIG) > 0) {
  // at least one purple wig is in view
}
The other effect of takeSnapshot is to populate the objects array with information about what the camera is looking at when takeSnapshot is called. To get new data from the vision sensor, takeSnapshot must be called again:
while (true) {
  int numseen = vision_3.takeSnapshot(ccode_SPOOLSTACK);
  // other code
}
Objects of this type represent a visual match for a signature or code created by takeSnapshot. The properties are accessed through the dot operator:
int middle_x = vision_3.largestObject.centerX;
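Since centerX tells you where a blob sits horizontally, a common use is steering toward it. The sketch below is a hypothetical standalone helper, not VEX API code; it assumes the Vision Sensor's image is 316 pixels wide, so a centered blob has centerX near 158:

```cpp
#include <cassert>

// Hypothetical helper, not part of the VEX API. Assumes the Vision
// Sensor image is 316 pixels wide, so a centered blob has a centerX
// of about 158.
const int IMAGE_WIDTH = 316;

// Signed distance from the image center: negative when the blob is
// left of center, positive when right of center, 0 when centered.
int steeringError(int centerX) {
    return centerX - IMAGE_WIDTH / 2;
}
```

In a drive loop, you might add a value proportional to this error to one side's motor power and subtract it from the other, turning the robot toward the object until the error approaches zero.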
Objects of type vision::object have the following members:
There are two places that vex::vision::object instances are usually found: the largestObject member and the objects member of a vex::vision object. largestObject is a single vision::object and is accessed through the dot operator on a vex::vision object, as seen above:
int middle_x = vision_3.largestObject.centerX;
The objects array is a collection of 0-4 vex::vision::object instances, depending on how many objects takeSnapshot saw. It is accessed through the dot operator like largestObject is, but since it is an array, it must be indexed with [square brackets] to get a single object:
int middle_x = vision_3.objects[0].centerX;
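When looping over the objects array, very small blobs are often sensor noise worth skipping. Here is a sketch of that filtering pattern; the stand-in struct below is hypothetical and exists only so the example is self-contained, though the real vex::vision::object does expose a width member:

```cpp
#include <cassert>

// Stand-in for vex::vision::object, defined here only to make this
// sketch self-contained; the real type has width among its members.
struct MockObject {
    int centerX;
    int width;
};

// Count how many of the first `count` blobs (takeSnapshot reports
// 0-4) are at least minWidth pixels wide, skipping noise blobs.
int countLargeObjects(const MockObject objects[], int count, int minWidth) {
    int large = 0;
    for (int i = 0; i < count; i++) {
        if (objects[i].width >= minWidth) {
            large++;
        }
    }
    return large;
}
```

The same loop shape works on the real array: iterate from 0 up to the count takeSnapshot returned, checking each object's width before acting on it.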
More examples of accomplishing tasks with a Vision Sensor are available on Robot Mesh Studio: