DFRobot HUSKYLENS 2 AI camera review – From built-in AI samples to training a custom model to detect elephants

Hello, today I am going to review the HUSKYLENS 2, released in October 2025. It is the next generation of HUSKYLENS, an AI vision sensor equipped with a Kendryte K230 dual-core RISC-V SoC with a 6 TOPS AI accelerator and a 2.4-inch IPS touchscreen. The device runs machine vision algorithms fully on-device, providing fast and low-latency performance, and includes more than 15 built-in AI models.

HUSKYLENS 2 also supports deploying custom-trained models, including integration with Large Language Models (LLMs) via a Model Context Protocol (MCP) service. In addition, it is compatible with various microcontrollers, such as Arduino and Raspberry Pi, through UART or I2C communication interfaces.

The HUSKYLENS 2

HUSKYLENS 2 unboxing

The manufacturer sent the HUSKYLENS 2 module and the Microscope Lens separately. Both parcels were shipped from Chengdu, China, and arrived at my office in Chanthaburi, Thailand, in about one week. The parcels were packed in standard cardboard boxes and arrived without any damage.

The following is the list of components received in the first shipment.

  • HUSKYLENS 2 AI Vision Sensor
  • Metal Accessory Kit
  • Power Adapter Board
  • Gravity-4P Sensor Connector Cable (30cm)
  • Dual-Plug PH2.0-4P Silicone Cable (20cm)
  • Product Qualification Card
  • HUSKYLENS 2 Wi-Fi Module

The following is the complete list of components received in the second shipment.

  • HUSKYLENS 2 Microscope Module (30x Mag)
  • Screwdriver


Components received in the first delivery.
Main components received from both deliveries: (left) the HUSKYLENS 2, (middle) the Wi-Fi Module, and (right) the Microscope Lens.

Checking HUSKYLENS 2 system information

The front of the HUSKYLENS 2 houses the default lens module, one RGB LED, two white fill-light LEDs on each side of the casing, a speaker, and a microphone. On the top, there is a button named Button-A which can be used for capturing an image, recording a video, or capturing a screenshot. The left side includes a microSD card slot. At the bottom of the HUSKYLENS 2, there are two main communication connectors. The USB-C port is used for power and mass-storage access, while the 4-pin JST-style connector, labeled “Gravity,” provides I²C and UART interfaces.

Components on the front side.
Gravity UART/I²C and USB-C interfaces at the bottom of the device.

We’ll often refer to the official documentation/wiki in this review. There are two main ways to power the HUSKYLENS 2. It can be powered directly using a 5V supply via the USB-C port, for example, from a computer or a power adapter. However, when using the HUSKYLENS 2 together with an external MCU, the manufacturer recommends using the power adapter board to ensure sufficient and stable power for both the HUSKYLENS 2 and the MCU. The following image shows an example of powering the HUSKYLENS 2 using the provided Power Adapter Board. In this setup, my PC is connected to the HUSKYLENS 2 through the USB-C port on the adapter board, while the ESP32 module requires a separate USB connection, which was not connected in this case.

Power the HUSKYLENS 2 using the Power Adapter Board.

For the initial quick test, I followed the Quick Connection using USB-C instructions by simply connecting the HUSKYLENS 2 to a PC with a USB-C cable. Since the device does not have a power button, it powered on automatically once the cable was connected. The logo briefly appeared on the 2.4″ IPS touchscreen, and the device was ready to use in less than 10 seconds. The IPS touchscreen worked well and felt smooth and responsive. All navigation gestures, including swiping and tapping, functioned as expected.

With the default firmware, the device is detected as a single storage device with several default directories, as shown in the following images.

HUSKYLENS 2 detected as a storage device.
Default directories within the detected storage device.

The default UI consists of four menu pages, which mainly display the available models, while the last page is dedicated to system settings and custom model installation.

Default icons in firmware v1.1.5.

I then opened the System Settings menu and selected Device Information to check the hardware and system details. My unit is hardware version 1.0.0 and comes pre-installed with firmware v1.1.5. Three languages are available: English, Simplified Chinese, and Traditional Chinese.

On first use, approximately 72 MB of RAM was used out of a total of 970 MB. The internal storage capacity is 7,045 MB, with the factory default installation occupying about 935 MB. Under idle conditions and without running any AI model, the Device Information page showed that the device temperature was around 36.4°C, while the ambient room temperature was approximately 30°C.

Device information.

Testing some built-in AI models

With the default firmware v1.1.5, there are 16 modes available on my HUSKYLENS 2, as listed below.

  • Face Recognition
  • Object Recognition
  • Object Tracking
  • Color Recognition
  • Object Classification
  • Self-Learning Classifier
  • Instance Segmentation
  • Hand Recognition
  • Pose Recognition
  • License Plate Recognition
  • Optical Character Recognition
  • Line Tracking
  • Face Emotion Recognition
  • QR Code Recognition
  • Barcode Recognition

After upgrading to firmware v1.2.1, the number of available modes increased to 19, with the newly added modes listed below.

  • Eye Gaze
  • Orientation Detection
  • Fall Detection

Here, I made a short overview video to showcase some of the AI models available on the HUSKYLENS 2 and to give a quick look at its AI performance using free video footage found on YouTube. Please note that all tests shown in this video were performed using firmware v1.2.1, while the remaining results in this review are based on both firmware versions 1.1.5 and 1.2.1.

Face Recognition

I started my first test with the Face Recognition mode. After tapping the icon, the HUSKYLENS 2 switched to a real-time camera preview and immediately began face detection. When faces were detected, the device correctly drew face bounding boxes and five facial landmarks on the preview. I also tested this mode using images and videos displayed on a monitor, and it continued to work as expected.

Briefly, each AI model includes an additional submenu at the bottom of the screen that allows users to control the model’s behavior. In the case of the Face Recognition mode, the following submenus are available.

  • Forget ID: Forget all previously learned face IDs.
  • Multi-Face Acceleration: Attempts to increase the display frame rate when three or more faces are shown simultaneously, but may reduce recognition accuracy.
  • Detect Threshold: Sensitivity of face detection.
  • Recognize Threshold: Strictness of face recognition. Lower values are more prone to false positives, while higher values reduce false positives.
  • NMS Threshold: Non-maximum suppression setting. Lower values are best for clear single-object scenes, while higher values work better for dense, occluded, or multiple-object scenes.
  • Face Features: Toggle the display of facial key points.
  • Set Name: Assign names to learned faces in either English or Chinese.
  • Show Name: Toggle the visibility of the recognized face name.
  • Reset Default: Restore all settings to their default values.
  • Import Model: Import model settings. Each model consists of two files with .json and .bin extensions, where the number in the filename corresponds to the model ID.
  • Export Model: Export the current model settings.

I then tested the detection threshold by lowering and increasing the slider, and the HUSKYLENS 2 responded appropriately. The NMS threshold adjustment also worked as expected. The following image shows the results of the face recognition model applied to AI-generated facial images displayed on my LCD monitor.

Test face recognition.

Hand Recognition

Next, I tested the Hand Recognition mode, which also performed well. It can detect palms in an image and correctly identify all 21 key-points, as advertised, including the wrist and joints for each finger. This mode also provides submenus to control the model’s behavior, such as adjusting the detection and recognition thresholds. The following image shows the detected key-points using a human hands image from Wikipedia.

Test hand recognition.

Pose Recognition

I also tested the Pose Recognition mode, which can detect the human body and extract body key-points. It worked well, and the detection and rendering performance with multiple people was good. The body bounding boxes and all 17 key-points were detected and displayed correctly. The following image shows the result of pose recognition using a pedestrian image from Wikipedia.

Test pose recognition.

Object Recognition

Next, I tested the Object Recognition mode. DFRobot says this model can identify more than 80 types of objects. I tested its recognition using common objects in my office, such as a person, chair, coffee cup, and smartphone, and it worked very well. The following image shows an example of the Object Recognition mode tested using the same pedestrian image.

Test object recognition.

Orientation Detection

Another interesting built-in AI model is the Orientation Detection mode. This model can detect a face and recognize the direction it is facing. Unfortunately, at the time of this review, no additional detailed information about this model was available on the official website. The following image shows the estimated face orientations using images of a French actress from Google Image search results. Each detected face is overlaid with rotation information, which I assume represents the yaw, pitch, and roll angles.

Test orientation detection.

Object Tracking and Self-Learning Classifier

Lastly, I tested the Object Tracking mode, which enables learning and tracking of a target object. Please note that the HUSKYLENS 2 is currently limited to tracking one object at a time. In addition, there is a Self-Learning Classifier mode, which is capable of capturing, learning, and recognizing custom objects. The process of using the Self-Learning Classifier is very similar to that of the Object Tracking mode.

To perform object tracking, I first drew a bounding box around the target object. The device then displayed a bounding box with the object ID and confidence score, for example, Obj: ID1 80%, indicating the first learned object with an 80% confidence level. After this step, the device was able to continuously track the object successfully. The following images show direct screenshots of the Object Tracking results. I tested this mode using multiple TAMIYA scale model bottles, with the green bottle selected as the target object. I then randomly moved the camera so that the target object left the field of view and later returned to the scene; the HUSKYLENS 2 was still able to re-detect the target object correctly. In addition, the object could be tracked from multiple viewing angles without issues, as long as the angle difference was not too large.

Object tracking test from multiple angles.
Track the target object from another viewing angle.

Testing programming environment

According to the official documentation, Arduino, UNIHIKER K10, UNIHIKER M10, micro:bit, and Raspberry Pi are listed as supported devices. ESP32 is mentioned briefly at the beginning of the document, but it is not actually included in the compatibility list. Therefore, I decided to test it myself to see whether the ESP32 works or not.

I followed the instructions recommending the use of the power adapter board for power distribution. I connected the HUSKYLENS 2 to the power adapter board using the 4-pin cable. Then, I connected another 4-pin cable to the connector labeled “Gravity” and wired it to the ESP32’s 3.3 V and GND pins, while the ESP32 itself was powered through a separate USB-C connection from my computer.

HUSKYLENS 2 supports multiple programming IDEs, including Arduino IDE, Mind+, and Python IDLE. In this review, I tested programming using the Arduino IDE by installing the latest DFRobot HUSKYLENS 2 library from their GitHub repository.

I started with a quick I2C scan, which detected the device at address 0x50.

Check the I2C address with the I2C scan example code.

Then, I tried the Face Recognition Output Data example from the official website, and it worked without any issues. My PC was able to receive and extract basic information sent from the HUSKYLENS 2, including the face ID, name, and bounding box center, as shown in the following image.

Retrieve basic information from face recognition results.

Deploying a custom-trained model

A custom model can be trained either using the Mind+ Server or Python. In both approaches, the trained model is first converted to ONNX format and then further converted into the HUSKYLENS 2 custom model format. In this review, I did not test the Mind+ Server method and directly used the Python-based approach instead.

Creating a custom model for the HUSKYLENS 2 requires several steps, but the process is quite straightforward: convert the PyTorch model (.pt) to ONNX, then use the official tool to export it into the HUSKYLENS 2 custom model format. According to the wiki, HUSKYLENS 2 currently supports models trained from YOLOv8n-based architectures, and only at 320×320 or 640×640 input sizes.

I prepared the programming environment by creating a new Conda environment with Python 3.12 as recommended. Then I installed Ultralytics, which provides the YOLOv8 tools needed for training and exporting the model. During this setup, I noticed that the versions of onnx and onnxslim that got installed by default didn’t match the required ranges (onnx ≥ 1.12.0 and ≤ 1.19.1, and onnxslim ≥ 0.1.71). Because of this mismatch, the export process failed at first. I had to manually reinstall the correct versions before the export tool worked properly.

Here in Chanthaburi, Thailand, we have serious issues with wild elephants entering farmland and residential areas, which often leads to property damage and sometimes even risks to human life. Because of that, I’m interested in exploring whether the HUSKYLENS 2 could be useful in a future early-warning or monitoring project. For testing, I downloaded the African Wildlife dataset from Kaggle, which is also available through Ultralytics.

Example images from the African Wildlife dataset.

I then trained the model at a resolution of 320 × 320 for a quick test run, using only 10 epochs.
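For reference, this quick training run can be sketched with the Ultralytics Python API. Note that african-wildlife.yaml is the dataset config bundled with Ultralytics; point it at your own .yaml file if you use the Kaggle download directly.

```python
# Sketch of the quick 10-epoch training run with the Ultralytics API.
# "african-wildlife.yaml" is the dataset config bundled with Ultralytics
# (downloaded on first use); adjust "data" for a manual dataset download.
TRAIN_CFG = {
    "data": "african-wildlife.yaml",
    "imgsz": 320,    # HUSKYLENS 2 accepts only 320x320 or 640x640 inputs
    "epochs": 10,    # quick test run only; more epochs improve accuracy
}

if __name__ == "__main__":
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # YOLOv8n base model, as required by the wiki
    model.train(**TRAIN_CFG)
```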


After that, I converted the trained model to ONNX format using the following command line.
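The conversion can also be done from Python instead of the command line. This sketch assumes the default Ultralytics output path for the trained weights, which may differ depending on your run.

```python
# Sketch of the .pt -> .onnx conversion with the Ultralytics export API.
# The weights path below is the default Ultralytics output location and
# is an assumption; use the path printed at the end of your training run.
EXPORT_CFG = {
    "format": "onnx",
    "imgsz": 320,    # must match the training resolution
}

if __name__ == "__main__":
    from ultralytics import YOLO
    YOLO("runs/detect/train/weights/best.pt").export(**EXPORT_CFG)
```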


Convert the .pt file into the .onnx file format.

Next, I downloaded the ONNX to HuskyLens 2 Installation Package GUI Tool from GitHub and installed all the required dependencies. This tool also needs .NET 7, so I installed it from Microsoft as well. After everything was set up, I copied the images, the .yaml file, and the ONNX model into the target folder: Custom_Model/application. From there, I ran python app.py, filled in all the required parameters, as shown in the following image, and clicked Convert and Package.

The ONNX to HuskyLens 2 Installation Package GUI Tool.

After waiting a few minutes, the tool generated the file dfrobot_wildlife_detection.41c5.zip. This ZIP file is my custom model package. To install it on the HUSKYLENS 2, I simply copied it into the device’s installation_package directory.

Custom model packaged in ZIP format.

After that, I opened the Model Installation in the HUSKYLENS 2 menu and used the Local Install option on the touchscreen menu to add the new model.

Installing a custom model from a local directory.

Once installed, the new menu item appeared immediately, and the model worked well. The following two images show screenshots of the performance of this 10-epoch custom model. Please note that although the input image was blurry (due to improper positioning of the device and the monitor, limited by my desk space), the HUSKYLENS 2 was still able to detect some of the elephants correctly. Also, since this custom model was trained for only 10 epochs, increasing the number of epochs should result in better performance.

Test the custom-trained model.

Installing the HUSKYLENS 2 Wi-Fi Module

By default, the HUSKYLENS 2 is not equipped with a Wi-Fi module, so the HUSKYLENS 2 Wi-Fi Module is required for wireless communication. This module supports Wi-Fi 6 (2.4 GHz) with a maximum data rate of 286.8 Mbps with 20/40 MHz bandwidth. More details are available on the official product page.

To install the module, I removed the four screws and opened the front casing. Due to the thermal grease, additional force was required to separate the casing. I then inserted the Wi-Fi module into the slot. The fit was quite tight, and some force was needed before it seated firmly in place. After that, I closed and re-screwed the front casing.

Remove the screws.
Remove the front casing.
Install the Wi-Fi module.

After powering on the device, I opened the System Settings and configured the Wi-Fi connection by entering my SSID and password. The device connected within approximately 5–10 seconds and obtained an IP address. A Wi-Fi icon appeared in the top-right corner of the screen, indicating that the wireless connection was successfully established and ready for use.

Available SSIDs.
Successfully connected to the target SSID.

Installing the HS Microscope Lens

The manufacturer of HUSKYLENS 2 allows users to replace the default lens with the HUSKYLENS 2 Microscope Lens Module, offering up to 30x magnification. This module delivers 2 MP image resolution using the GC2093 sensor, with a spatial resolution of 161 lp/mm, enabling it to resolve line widths down to approximately 3 µm (USAF 1951, Group 7, Element 3).

HUSKYLENS 2 Microscope Lens Module (30x Mag)
Connector of the HUSKYLENS 2 Microscope Lens Module.

The module can be installed by unscrewing the two screws beside the default camera lens module and removing the original camera module. Next, align the camera connector with the HUSKYLENS 2 socket and apply light pressure to ensure a firm connection. I followed these steps, and after powering on the device, it correctly displayed images from the new lens.

Remove screws.
Install the Microscope Lens Module.
The Microscope Lens Module is installed.

I am not sure whether it truly achieves 30× magnification or not, but it works well, as shown in the following image, which captures a magnified view of an LCD monitor displaying a white image. The red, green, and blue sub-pixel layout can be clearly seen. Please note that the image displayed on the HUSKYLENS 2 screen appears slightly blurry because the module was shaking slightly during capture.

Test the Microscope Lens Module.

Upgrading firmware

My default firmware was v1.1.5, and it worked very well; however, the MCP Server is not available on this version. To test the MCP Server functionality, Wi-Fi support and firmware v1.1.6 or later are required. Therefore, I upgraded from v1.1.5 to the latest firmware available on GitHub (v1.2.1 at the time of this review) and followed the instructions provided in the official documentation.

First, I powered off the HUSKYLENS 2, held down Button-A, and then powered it on again, waiting at least 2 seconds before releasing the button. With this boot sequence, the device appeared as a K230 USB Boot Device instead of being detected as a HUSKYLENS 2.

K230 USB boot device detected.

After that, I installed the required driver using Zadig and waited about 30 seconds for the installation to complete.

Run the Zadig tool.

Next, I launched K230BurningTool (release 2025-05-07 02:58), browsed for the downloaded firmware image file, and pressed the Start button to begin the flashing process. The firmware update took approximately one minute to complete.

Burn the new firmware with K230BurningTool.

After restarting the HUSKYLENS 2, I noticed that the device was detected with additional SD card storage, rather than only the internal HUSKYLENS 2 mass storage, as in firmware v1.1.5. I also tested upgrading and downgrading between v1.2.1 and v1.1.6 several times, and the process worked reliably without any issues.

Check system information after the firmware upgrade.

Testing MCP Server

To use the MCP Service, firmware version 1.1.6 or later is required. The service can be enabled using the MCP Service icon. After enabling the MCP server, the default URL for client connection was displayed on the HUSKYLENS 2 screen.

Start MCP Server service.


I then created a Google AI Studio account and generated a new API key. Next, I installed Cherry Studio version 1.7.2. In Cherry Studio, I set the model provider to Gemini and entered the API key obtained earlier. The official documentation describes how to create a new gemini-2.5-flash model; however, this model was already available in my Cherry Studio installation, so I simply used the existing default gemini-2.5-flash model. The connection test was successful.

Enable Gemini in Cherry Studio.
Select Gemini model.

After that, I created a new MCP server connection by setting the connection type to Server-Sent Events (SSE) and configuring the URL exactly as shown on the HUSKYLENS 2 screen. Once enabled, four new tools appeared, indicating that the connection to the HUSKYLENS 2 was successful. These four tools correspond to the MCP tools listed below:

  • manage_applications
  • multimedia_control
  • get_recognition_result
  • task_scheduler

These tools can be used to check the currently running algorithm, switch between models, take photos, and query AI recognition results. Please note that these tools currently provide only basic functionality and are still undergoing optimization.

Set MCP server URL.
Test connection to the HUSKYLENS 2 MCP server.

After restarting, I switched to Chat mode, selected Gemini 2.5 Flash as the Gemini model, and chose my HUSKYLENS 2 MCP server in the tool settings. I then tested the chat functionality using the exact command shown in the tutorial documentation, What models/algorithms are currently available?, but encountered errors.

Encountered errors.

It appeared that some parameters were missing or incorrectly transmitted, and I initially suspected a version mismatch. I therefore tried downgrading the firmware to version v1.1.6, as mentioned in the tutorial, but the issue persisted. I then switched the model provider in Cherry Studio from Gemini 2.5 to CherryAI and found that both the Qwen3-8B and GLM-4.5-Flash models worked correctly.

Using these CherryAI models, I was able to chat successfully to query the available AI models and check the currently running modes on the device. I also tested switching among Face Recognition, Hand Recognition, and Instance Segmentation modes, and all of them worked as expected.

Successful chat session with GLM-4.5-Flash.
Successful chat session with Qwen3-8B.
Chat to check whether the device is running in Hand Recognition mode.
Chat to check whether the device is running in Instance Segmentation mode.

However, I could not use other tools, such as taking a photo or scheduling a task, as these tools either failed to respond within the timeout period or could not execute internal commands.

Testing video streaming

To stream video from the HUSKYLENS 2 using a USB cable, the RNDIS driver must be installed. I followed the official instructions and used the default Microsoft USB RNDIS driver available in my Windows 11 system32 directory. Once installed, the USB RNDIS Adapter appeared correctly in the network settings. The HUSKYLENS 2 streams video using the RTSP protocol, which can be enabled via the Video Streaming icon in the system menu (if the streaming option is not available, make sure the firmware is up to date). After enabling streaming and starting any AI model, the device begins transmitting video at the default URL: rtsp://192.168.88.1:8554/live

I tested RTSP streaming using Python and OpenCV with the provided RTSP URL. The connection was established successfully, and video frames were streamed correctly. I also tested wireless video streaming by enabling Wi-Fi connectivity and turning on the RTSP and WebRTC streaming options in the Video Streaming menu, and it worked as expected. The following images show the live stream viewed in a web browser and in VLC Media Player.
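A minimal version of this OpenCV test looks like the sketch below; the frame limit and preview window are illustrative additions, and the stream only works while the device is connected and an AI model is running.

```python
# Minimal sketch of reading the HUSKYLENS 2 RTSP stream with OpenCV.
DEFAULT_RTSP_URL = "rtsp://192.168.88.1:8554/live"  # default over USB RNDIS

def read_frames(url: str = DEFAULT_RTSP_URL, max_frames: int = 100) -> int:
    """Open the RTSP stream, show a preview, and count decoded frames."""
    import cv2
    cap = cv2.VideoCapture(url)
    count = 0
    while count < max_frames:
        ok, frame = cap.read()
        if not ok:  # stream unavailable or connection dropped
            break
        cv2.imshow("HUSKYLENS 2", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        count += 1
    cap.release()
    cv2.destroyAllWindows()
    return count

if __name__ == "__main__":
    print(f"decoded {read_frames()} frames")
```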

Enable Wi-Fi connectivity and RTSP streaming.
Display live streaming in a web browser.
Display live streaming in the VLC media player.

Calibrating camera intrinsics and lens distortion coefficients

I noticed some barrel-type distortion from the HUSKYLENS 2 lens, so I performed a quick camera calibration to estimate the camera intrinsics and distortion coefficients. The camera intrinsics describe the imaging characteristics of the camera, including the focal length and principal point. The lens distortion coefficients represent the parameters of radial and tangential distortions.

I captured a set of 640×480 checkerboard images and processed them using OpenCV’s camera calibration function in Google Colab.
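The calibration step can be sketched as follows. The 9×6 inner-corner pattern size and the captures folder are assumptions; adapt them to your own checkerboard and file layout.

```python
# Sketch of the checkerboard calibration with OpenCV, assuming a 9x6
# inner-corner pattern and 640x480 images in a "captures" folder
# (both are assumptions to adjust for your own setup).
import numpy as np

PATTERN = (9, 6)  # inner corners per row and column

def object_points(pattern: tuple = PATTERN) -> np.ndarray:
    """3D board coordinates of the checkerboard corners on the z = 0 plane."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    return objp

if __name__ == "__main__":
    import cv2, glob
    objpoints, imgpoints = [], []
    for path in glob.glob("captures/*.png"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            objpoints.append(object_points())
            imgpoints.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints, (640, 480), None, None)
    print("RMS:", rms, "\nK:\n", K, "\ndist:", dist.ravel())
```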

Input checkerboard pattern.
Detected checkerboard pattern.

As a result, the following intrinsic parameters were obtained:

  • fx: 700.2057
  • fy: 683.6732
  • cx: 315.1202
  • cy: 225.5184

The resulting camera matrix appears typical for a compact AI vision sensor, with fx = 700.21 and fy = 683.67. Using the standard field-of-view (FOV) formula FOV = 2 × arctan(image_size / (2 × focal_length)), the estimated horizontal FOV is approximately 49.12°, and the vertical FOV is approximately 38.69°. The optical center (315.12, 225.52) is slightly shifted but remains close to the image center (320.0, 240.0), which is normal for small, low-cost lenses.
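The FOV numbers can be reproduced with a few lines of Python using the formula FOV = 2 × arctan(image_size / (2 × focal_length)):

```python
# Reproduce the FOV estimates from the calibrated focal lengths.
import math

FX, FY = 700.2057, 683.6732  # focal lengths in pixels (from the calibration)
W, H = 640, 480              # calibration image size in pixels

def fov_deg(size_px: float, focal_px: float) -> float:
    """Full field of view in degrees along one image axis."""
    return math.degrees(2.0 * math.atan(size_px / (2.0 * focal_px)))

hfov = fov_deg(W, FX)  # about 49.1 degrees horizontal
vfov = fov_deg(H, FY)  # about 38.7 degrees vertical
```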

The estimated distortion coefficients are also reasonable for this type of hardware:

  • k1 = 0.0913
  • k2 = −0.5608
  • k3 = 0.4919
  • p1 = 0.00019
  • p2 = 0.00163

The radial coefficients (k1, k2, k3) clearly indicate barrel distortion that becomes stronger toward the edges. Meanwhile, the tangential values (p1, p2) are very small and only show minor off-axis alignment, usually just a result of normal assembly tolerances.
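To see this numerically, the radial model r' = r × (1 + k1·r^2 + k2·r^4 + k3·r^6) can be evaluated at a few normalized radii; the sample radii below are illustrative, with 0.57 roughly corresponding to the image corner for this focal length.

```python
# Evaluate the radial distortion factor 1 + k1*r^2 + k2*r^4 + k3*r^6
# at a few normalized radii, using the coefficients estimated above.
K1, K2, K3 = 0.0913, -0.5608, 0.4919

def radial_scale(r: float) -> float:
    """Multiplicative radial distortion factor at normalized radius r."""
    r2 = r * r
    return 1.0 + K1 * r2 + K2 * r2**2 + K3 * r2**3

# Mid-field points are pushed outward very slightly (factor just above 1),
# while points near the image corners are pulled inward (factor below 1),
# i.e. barrel distortion that grows toward the edges.
mid = radial_scale(0.30)     # slightly above 1
corner = radial_scale(0.57)  # below 1
```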

However, this was only a quick calibration test, and a more careful and thorough calibration would be needed for real applications.

Visualize radial distortion.
Visualize tangential distortion.

Checking temperature and heat distribution

To test the device’s thermal behavior, I turned on Wi-Fi, enabled video streaming via WebRTC, ran the Hand Recognition mode, and kept the MCP server enabled at the same time. Overall, when operating under this full load, the device felt noticeably hot.

The following images show thermal images of the front of the HUSKYLENS 2 captured using a FLIR E4 thermal camera. The highest temperature observed was around 43.2 °C, with the heat distribution being relatively even across the front casing. On the back side, the highest temperature was approximately 38 °C and was mainly concentrated on the right side, corresponding to the location of the Wi-Fi module on the opposite front side. As shown in the images below, more heat was observed on the front of the HUSKYLENS 2, which could be clearly felt when holding the device in hand.

Heat distribution on the front of the HUSKYLENS 2.
Heat distribution on the back of the HUSKYLENS 2.

Conclusion

My overall impression of the HUSKYLENS 2 is positive. The AI functionality of the HUSKYLENS 2 works very well. Even though the image and video stream resolutions are relatively low, I am satisfied with the overall image quality. In addition, although I could not test all MCP server capabilities, the embedded MCP Server service provides a very effective and flexible workflow for AI tasks on an embedded device.

I did encounter some minor issues during testing. Besides the device becoming quite hot when running AI models, I experienced several crashes, especially during live streaming, such as lost connections or the device becoming unresponsive. In these situations, a power cycle was required to restore normal operation. Another minor issue is that double-pressing Button-A to capture screenshots is somewhat difficult, and this often causes image or video shaking during recording.

The HUSKYLENS 2, HUSKYLENS 2 Wi-Fi Module, and HUSKYLENS 2 Microscope Camera Module are available for purchase from DFRobot at prices of $74.90, $7.90, and $11.90, respectively. It might also eventually become available on DFRobot’s Amazon store, but right now, only the previous generation HUSKYLENS is listed there.
