SenseCraft AI Overview
SenseCraft AI is an all-in-one platform designed to empower developers and creators in building and deploying AI projects with ease. The website offers a wide range of tools and features to streamline the AI development process, making it accessible to users with varying levels of expertise. In this wiki, we will explore the main sections of the SenseCraft AI website, providing an overview of their key features and functionalities.
Home
The home page of SenseCraft AI serves as the central hub, providing users with an overview of the platform's main functionalities. The navigation bar at the top of the page features five main sections: Home, Pretrained Models, Training, Vision Workspace, and About SenseCraft AI.
The primary focus of the home page is the "Start your journey: Deploy a pretrained Model" section, which guides users through a step-by-step process to deploy a pre-trained model using Seeed Studio hardware. The process is divided into three main steps:
- Select a pre-trained model from the model repository.
- Deploy and preview the model's results in real-time using the "Deploy and Preview Vision" feature.
- Apply the model to the connected Seeed Studio hardware and view the sensor output.
This feature is particularly useful for users who want to quickly experiment with AI models without having to go through the entire training process themselves.
Moving further down the page, users will find an introduction to the "Training Models" feature. This section categorizes the content related to model training, making it easier for users to find the information they need to train their own AI models using the platform's resources.
Lastly, the home page showcases the "Sharing Vision AI Models" feature, which encourages collaboration and knowledge sharing among the SenseCraft AI community. This feature allows users to share their trained models with others, fostering a sense of community and enabling users to build upon each other's work.
User Account
SenseCraft AI is an open platform: all public AI models and the Home page can be browsed without logging in. You only need to sign up and sign in when you want to deploy a model or share your own models.
SenseCraft AI and the SenseCraft Data Platform (formerly the SenseCAP Cloud Platform) are both software services provided by Seeed Studio. You only need to sign up for an account on either platform; the same account can then be used to sign in on both.
Sign up
- Enter your name and a valid email address, then click Get Captcha.
- Retrieve the verification code from your email and enter it on the sign-up page.
The verification code is valid for 10 minutes, so please complete your registration within that time.
- Enter your password and other user information to complete your registration.
Sign in
Sign in with your registered email account.
Forget password
If you forget your password, enter your registered email address and request a verification code to set a new password.
The verification code is valid for 10 minutes, so please complete the reset within that time.
Change password
Visit the user account page and click the "Change your password" button.
Enter the old password and a new password to complete the change.
Pretrained Models
The Pretrained Models section of the SenseCraft AI website is a comprehensive repository of AI models that users can easily access and deploy on their devices. The model repository currently houses more than 30,000 models, with more being added continuously.
Model Categories
To help users find the most suitable model for their needs, the left side of the page displays a categorized list of models. Users can filter the models based on various criteria, such as:
Supported Devices: Users can select models that are compatible with the specific hardware they are using, ensuring seamless integration and optimal performance.
Task: Models are categorized according to the tasks they are designed to perform, such as Detection, Classification, or Segmentation. This allows users to quickly identify models that align with their project requirements.
Publisher: Users can also filter models based on the publisher, making it easy to find models from trusted sources or specific developers.
Model Details
The central area of the Pretrained Models page showcases essential information about each model, including its name, a brief description, and a visual representation. This quick overview helps users get a sense of what each model offers and how it might fit into their projects.
To access more detailed information about a specific model, users can simply click on the model's card. This will take them to a dedicated page for that model, where they can find in-depth descriptions, performance metrics, and step-by-step instructions on how to install and use the model on their devices.
My Own Models
In addition to the public AI models available in the repository, SenseCraft AI also provides a personalized space for users who have trained or uploaded their own models. By logging into their SenseCraft AI account, users can access the "My Own Models" section, where they can find and manage their private models.
Models in the "My Own Models" section are completely private and can only be accessed by the user who created them. However, users have the option to make their models public, allowing others in the SenseCraft AI community to benefit from their work. This feature promotes collaboration and knowledge sharing among users, fostering a vibrant and supportive community of AI enthusiasts.
Training
The Training section of the SenseCraft AI website is designed to help users create customized models tailored to their specific use cases. Currently, the Training page offers two types of training: Classification and Object Detection.
Classification
The Classification training is based on TensorFlow and is fully web-based, eliminating any operating system limitations. This feature allows users to train models using images captured from their local computer's camera or Seeed Studio products. To train a model, users only need to collect 40-50 images per class, without the need for manual labeling. The training process is quick, taking only a few minutes to generate a model. Additionally, the web interface provides real-time preview functionality, enabling users to see the results of their trained model instantly.
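To give a sense of what happens behind the browser interface, the sketch below shows a typical transfer-learning setup of this kind in TensorFlow/Keras. It is an illustrative assumption, not SenseCraft AI's actual training code: a frozen ImageNet backbone with a small trainable head is what makes 40-50 images per class and a few minutes of training sufficient. The `data` folder layout (one sub-folder per class) is hypothetical.

```python
# Hypothetical sketch of the transfer-learning approach a browser-based
# classification trainer typically uses. Assumes images are arranged in
# one folder per class, e.g. data/cat/*.jpg, data/dog/*.jpg.
import tensorflow as tf

# Load images; sub-folder names become the class labels.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(96, 96), batch_size=16)
num_classes = len(train_ds.class_names)

# Frozen ImageNet backbone plus a small trainable head: this is why a
# few dozen images per class and a few minutes of training are enough.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```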
Object Detection
The Object Detection training is based on the YOLO-World model and is divided into two sub-sections: Quick Training and Image Collection Training.
- Quick Training: This option allows users to generate a single-class recognition model by simply inputting the object name. As explained on the website, "Based on the YOLO-World object detection model, you can quickly generate a single-class recognition model by inputting text." A code sketch illustrating this text-prompted approach appears at the end of this Object Detection section.
The Quick Training option under the Object Detection training is powered by the YOLO-World object detection model, which is a state-of-the-art, real-time object detection system. When a user inputs an object name, the system leverages the pre-trained knowledge of the YOLO-World model to generate a single-class recognition model specifically tailored to detect that object.
The YOLO (You Only Look Once) model family is known for its speed and accuracy in object detection tasks. It divides the input image into a grid and predicts bounding boxes and class probabilities for each grid cell. The YOLO-World model, in particular, has been trained on a vast dataset covering a wide range of objects, enabling it to generalize well to various detection tasks.
By building upon the YOLO-World model, the Quick Training option inherits its robust feature extraction and object localization capabilities. The pre-trained model serves as a strong foundation, allowing users to quickly generate a single-class recognition model without the need for extensive training data or computational resources.
However, it is important to acknowledge that the Quick Training option may have limitations in terms of adaptability and precision. As the generated model relies on the pre-existing knowledge of the YOLO-World model, it may not always capture the unique characteristics or variations of the user-specified object. This can lead to reduced accuracy or false detections in certain scenarios.
- Image Collection Training: This option requires users to input the object name and upload relevant images. The website describes this feature as follows: "Based on the YOLO-World object detection model, you can customize the training for text and image, which can improve the detection accuracy of the generated model."
The Image Collection Training option in SenseCraft AI allows users to train a custom object detection model using their own dataset, without the need for manual image annotation. This feature is based on the YOLO-World object detection model and utilizes a specialized training approach that eliminates the requirement for bounding box labeling or object segmentation.
The key principle behind this training option is the concept of weakly supervised learning. In weakly supervised learning, the model learns to detect objects using only image-level labels, without the need for precise object localization or bounding box annotations. The YOLO-World model, which serves as the foundation for Image Collection Training, has been designed to leverage this approach effectively.
During the training process, users provide a set of images along with the corresponding object names they want to detect. The model then learns to associate the visual patterns and features present in the images with the provided object names. By exposing the model to a diverse range of images containing the objects of interest, it learns to generalize and detect those objects in new, unseen images.
The YOLO-World model's architecture and training methodology enable it to automatically discover and localize objects within the images, without the need for explicit bounding box annotations. This is achieved through a combination of convolutional neural networks (CNNs) and specialized loss functions that guide the model to focus on the most informative regions of the images.
By eliminating the need for manual image annotation, the Image Collection Training option significantly reduces the effort and time required to create a custom object detection model. Users can simply collect a dataset of images containing the objects they want to detect, provide the object names, and let the model learn to recognize those objects automatically.
However, it's important to note that the quality and diversity of the dataset still play a crucial role in the performance of the resulting model. The model's ability to generalize and detect objects accurately depends on the variety and representativeness of the training images. Users should strive to collect a dataset that covers different object appearances, poses, backgrounds, and lighting conditions to ensure robust performance.
By providing these two training options, SenseCraft AI enables users to create custom object detection models that are optimized for their specific needs. The Quick Training option is ideal for users who need a simple, single-class recognition model and want to generate it quickly. On the other hand, the Image Collection Training option is suitable for users who require a more accurate and customized model, as it allows them to provide their own training data in the form of object names and images.
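As a concrete illustration of the text-prompted idea behind Quick Training, the open-source `ultralytics` package exposes a YOLO-World model that accepts class names as text. The sketch below is for illustration only and is not SenseCraft AI's internal implementation; the weights file and image name are placeholders.

```python
# Text-prompted detection with the open-source YOLO-World model
# (ultralytics package). This mirrors the idea behind Quick Training:
# input an object name, get a single-class detector.
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")  # pre-trained open-vocabulary weights
model.set_classes(["coffee mug"])      # "input the object name"

# The model now behaves as a single-class detector for that prompt.
results = model.predict("desk.jpg", conf=0.25)  # placeholder image
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))
```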
Publishing Models
SenseCraft AI supports content co-creation for developers and model builders! Share your results with the global developer community. Through our open AI platform, you will also have the opportunity to match your AI models with commercial needs, providing valuable solutions for enterprises and users across industries. We look forward to your participation and contribution in bringing AI innovation to commercial applications!
- To add a model, you need to complete the following information:
  - Model Name
  - Model Excerpt: A brief description of the model
  - Model Introduction: A detailed description of the model
  - Model Deployment Preparation: Prerequisites for deploying the model (optional)
  - Supported Device: Choose which device the model will be deployed on; the platform currently supports Jetson devices, the XIAO ESP32S3, etc.
  - Model Inference Example Image: Upload an image of the model's inference results
- Click Next when the information is complete.
- Enter the following information about the model parameters.
- "Publish the model to the public AI model library" is checked by default, so the model will be visible to everyone after saving; if unchecked, the model will be visible only to you.
Parameter | Description
---|---
Model Format | The format of the model file. Options: ONNX, TensorRT, PyTorch. The platform will support more model formats in the future.
Task | The task type of the model. Options: Detection, Classification, Segmentation, Pose.
AI Framework | The AI framework of the model. Options: YOLOv5, YOLOv8, FOMO, MobileNetV2, PFLD. The platform will support more AI frameworks in the future.
Classes | The classes or labels that the model predicts for a specific task. Please make sure each class ID and class name match correctly.
Model File | Upload a model file in the format of your choice.
Model Precision | The numeric precision of the model. Options: Int8, Float16, Float32.
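If your model was trained in PyTorch and you want to upload it in ONNX format, a minimal export might look like the sketch below. The network here is a stand-in (`torchvision`'s MobileNetV2), and the input shape and file name are placeholders for your own model.

```python
# Minimal sketch: export a trained PyTorch model to ONNX before uploading.
import torch
import torchvision

# A stand-in network; replace with your own trained model.
model = torchvision.models.mobilenet_v2(weights=None)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # example input with your model's shape
torch.onnx.export(model, dummy, "my_model.onnx",
                  input_names=["images"], output_names=["output"],
                  opset_version=12)
```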
To ensure the healthy development of our platform, we review the models and content posted by users. Any illegal, non-compliant, or infringing content will not be published and may be deleted. Thank you for your understanding and support in maintaining a healthy platform environment!
Custom AI Model Management
Users have full permissions to manage their own models.
Publish Model: Publish a private model so that it becomes available to all users.
Privatize Model: Make a public model private; the model will be visible only to you.
Delete Model: Delete a private model. Public models cannot be deleted.
Edit Model: Modify any of the model's information.
Vision Workspace
The Vision Workspace section of SenseCraft AI is dedicated to device-specific operations and deployment of trained models. It provides a seamless interface for users to integrate their custom models with various hardware devices and preview the results in real-time. Currently, the supported devices include Grove Vision AI V2, XIAO ESP32S3 Sense, NVIDIA Jetson, and reCamera.
Model Deployment and Preview
Once a user has successfully uploaded a trained model, they can navigate to the device-specific page within the Vision Workspace. Under the "Process" section, users can observe the real-time detection feed from the connected device, allowing them to preview the model's performance in action.
This real-time preview feature is particularly valuable as it enables users to assess the model's accuracy and effectiveness in detecting objects within the device's video stream. Users can visually inspect the bounding boxes, labels, and confidence scores generated by the model, providing immediate feedback on its performance.
Model Fine-tuning
In addition to real-time preview, the Vision Workspace also offers the capability to fine-tune the model's confidence threshold parameter. This feature allows users to adjust the model's sensitivity to object detection, enabling them to strike a balance between precision and recall.
By manipulating the confidence threshold, users can control the model's behavior in terms of detecting objects. A higher confidence threshold will result in the model being more selective, only detecting objects with a high degree of certainty. Conversely, a lower confidence threshold will make the model more sensitive, detecting objects even with lower confidence scores.
This fine-tuning capability empowers users to adapt the model to their specific requirements, optimizing its performance based on the characteristics of their application and the environment in which the device operates.
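Conceptually, the threshold is a simple filter over the scored detections the model produces. The sketch below uses made-up detection values to show how raising or lowering the threshold trades recall for precision.

```python
# Illustration of what the confidence-threshold slider does: the model
# always produces scored candidate boxes, and the threshold decides which
# ones are reported. The detections below are made-up example values.
detections = [
    {"label": "person", "conf": 0.91, "box": (12, 30, 118, 220)},
    {"label": "person", "conf": 0.48, "box": (200, 40, 260, 190)},
    {"label": "dog",    "conf": 0.22, "box": (140, 150, 210, 230)},
]

def filter_by_confidence(dets, threshold):
    """Higher threshold -> fewer, more certain detections (precision);
    lower threshold -> more detections, including weak ones (recall)."""
    return [d for d in dets if d["conf"] >= threshold]

print(filter_by_confidence(detections, 0.8))  # only the 0.91 person
print(filter_by_confidence(detections, 0.3))  # both people, no dog
```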
Output and Application Development
The Vision Workspace goes beyond model deployment and preview by providing users with the tools to quickly prototype and develop applications using the trained models. The "Output" section offers a range of options for users to interact with the model's results and integrate them into their desired applications.
Taking the XIAO ESP32S3 Sense as an example, the Vision Workspace supports various communication protocols and interfaces, such as MQTT, GPIO, and Serial Port. These options enable users to seamlessly transmit the model's output to other systems, trigger actions based on object detection, or perform further processing on the detected results.
By offering these output options, SenseCraft AI simplifies the process of integrating the trained models into practical applications. Users can quickly experiment with different communication methods and develop prototypes that leverage the object detection capabilities of their models.
For instance, a user could utilize the MQTT output to send real-time object detection data to a remote server for monitoring or analytics purposes. Alternatively, they could use the GPIO output to trigger physical actions, such as turning on a light or activating an alarm based on the presence of specific objects.
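On the receiving side, such a monitoring setup could be as simple as the hypothetical `paho-mqtt` subscriber sketched below. The broker address, topic name, and JSON payload format are assumptions; use whatever values you configure in the device's Output panel.

```python
# Hypothetical receiving end of the MQTT output: a small paho-mqtt
# subscriber running on a remote machine.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"          # assumed local broker address
TOPIC = "sensecraft/detections"  # assumed topic configured on the device

def on_message(client, userdata, msg):
    # Assumes the device publishes detection results as JSON.
    data = json.loads(msg.payload)
    print("detections:", data)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```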
The Serial Port output provides a straightforward way to establish communication between the device and other systems, enabling users to transmit the model's results for further processing or visualization.
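A minimal host-side reader for the Serial Port output might look like the following `pyserial` sketch; the port name, baud rate, and line format are assumptions that depend on your device and configuration.

```python
# Hypothetical host-side reader for the Serial Port output, using pyserial.
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # COMx on Windows
while True:
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        print("device says:", line)  # e.g. a JSON detection record
```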
Tech Support & Product Discussion
Thank you for choosing our products! We are here to provide you with different support to ensure that your experience with our products is as smooth as possible. We offer several communication channels to cater to different preferences and needs.