![Soft illusion](/img/default-banner.jpg)
Soft illusion
India
Joined December 22, 2016
Interested in robotics and simulation and wish to make a difference? Then you are probably in the right place. :D
Soft_illusion is a channel that aims to help the robotics community!
Robotics is not difficult! It is inspiring and challenging! We are in the era of the robotics revolution, where Disco will be the name of a pet robot, not a dog.
This channel is driven by the motive to provide good robotics tools that help everyone gain a clear and simple understanding of this so-called complex robotics domain. Hopefully it will build the bridge that turns viewers' novel ideas into reality.
Currently we are working on easy-to-learn tutorials for the robot simulator Webots.
The tutorials begin with the basic installation of the simulator and range up to higher-level applications.
Following is the link to the playlist:
ua-cam.com/video/yi4e5FoVWbQ/v-deo.html
So let's learn, code, and have fun.
Transforming Reality: The Magic Of 3d Reconstruction From Your Camera
#computervision #funwithrobots #stereocamera
0:01 Introduction 3D Point Cloud
0:33 How to extract depth from point cloud.
1:06 Triangulation and Stereo Matching
1:58 Generate Depth Map for 3D reconstruction
2:38 Conclusion
Why are 3D pointclouds so crucial in applications such as self-driving technology? It's simple.
They provide the essential depth information that is lost when we capture photos in 2D, due to a process called perspective projection. This is also why distant objects appear smaller, and why parallel lines seem to converge toward a single point.
This depth information plays a crucial role in accurately measuring the distance and size of various objects on the road, allowing autonomous vehicles to navigate safely and efficiently.
Now, you might wonder, how do we extract this depth information? Well, one of the simplest methods is a stereo camera layout. This layout mimics human binocular vision, with two cameras capturing images offset by a horizontal distance, similar to our left and right eyes.
The stereo camera layout provides the necessary relationship between the three-dimensional world and the two-dimensional image coordinate systems, which allows the depth of the viewed scene to be estimated. To achieve this, we need to know the extrinsic and intrinsic parameters of the camera.
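To make the intrinsic parameters concrete, here is a minimal sketch (not from the video; the focal length and principal point values are made up) of how a 3×3 intrinsic matrix K projects a 3D point in the camera frame to pixel coordinates:

```python
import numpy as np

# Hypothetical intrinsics: focal length 700 px, principal point (320, 240)
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, point_3d):
    """Perspective projection: multiply by K, then divide by depth.
    The division by depth is exactly why distant objects appear smaller."""
    x = K @ np.asarray(point_3d, dtype=np.float64)
    return x[:2] / x[2]

# A point 2 m straight ahead lands on the principal point
u, v = project(K, [0.0, 0.0, 2.0])  # -> (320.0, 240.0)
```

Doubling the depth of a point halves its pixel offset from the principal point, which is the 2D information loss the transcript describes.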
Once the camera is calibrated, the next step is triangulation. This process uses the known positions and angles of the two cameras to estimate the location of points in the scene.
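For an ideal rectified stereo rig, this triangulation collapses to a single formula, Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A sketch with hypothetical numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Ideal rectified stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline B = 0.12 m, disparity d = 42 px
print(depth_from_disparity(700, 0.12, 42))  # 2.0 metres
```

Note the inverse relation: halving the disparity doubles the estimated depth, which is why small matching errors hurt most for distant objects.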
Then comes the task of stereo matching, which involves finding disparities or differences between the two images. This is done using a technique called template matching.
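Template matching for stereo can be sketched as a sum-of-absolute-differences (SAD) search along the same scanline; this is a simplified toy version under rectified-image assumptions, not the exact method used in the video:

```python
import numpy as np

def match_block(left, right, row, col, block=5, max_disp=16):
    """Slide the block around (row, col) of the left image along the same
    row of the right image and return the shift (disparity) with the
    smallest sum of absolute differences."""
    h = block // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(max_disp):
        c = col - d
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(np.int32)
        cost = int(np.abs(ref - cand).sum())
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

On a well-textured pair where the left view is the right view shifted by a few pixels, this recovers that shift; on repetitive or textureless surfaces several shifts give near-identical costs, which is exactly the failure mode described above.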
However, stereo matching presents its own set of challenges. For instance, the surfaces must have non-repetitive textures, and there may be differences in brightness or other features between the two images due to the foreshortening effect.
But these challenges can be overcome by choosing the appropriate window size for template matching. Too small a window may be less descriptive and sensitive to noise, while a large window may produce a more robust but blurred disparity map. Therefore, an adaptive window is often the best approach.
The final goal of 3D reconstruction is to generate a depth map, an image where every pixel contains depth information rather than color information. The accuracy of this depth map depends on the type of sensor used, with LiDAR, infrared, and cameras offering varying degrees of precision.
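Applying the triangulation relation per pixel turns a disparity map into a depth map; a minimal NumPy sketch (function and variable names are my own, not from the video):

```python
import numpy as np

def disparity_to_depth(disp, f_px, baseline_m, min_disp=1e-6):
    """Per-pixel Z = f * B / d; pixels with no valid match (disparity
    <= min_disp) are set to 0 to mark them as invalid."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.zeros_like(disp)
    valid = disp > min_disp
    depth[valid] = f_px * baseline_m / disp[valid]
    return depth

disp_map = np.array([[42.0, 84.0, 0.0]])        # 0 = no match found
print(disparity_to_depth(disp_map, 700, 0.12))  # [[2. 1. 0.]]
```

Marking unmatched pixels as invalid rather than dividing by zero is a common convention; downstream consumers can then inpaint or ignore those pixels.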
Stereo reconstruction, the process we've discussed today, uses simple cameras to generate depth maps. It's the same principle our brain and eyes use to understand the depth of our surroundings. With a calibrated camera, the correct parameters, and some clever techniques for overcoming challenges, it's possible to generate accurate 3D pointclouds for a variety of applications.
So, the next time you see an autonomous vehicle navigate smoothly on the road or marvel at the precision of augmented reality, remember the crucial role of 3D point clouds and the power of stereo camera layouts in making these technological marvels possible. Each subscription not only gives you access to a wealth of knowledge but also supports our efforts to create more such informative content. So why wait? Hit that subscribe button.
171 views
Videos
Let the cat help you code | Tabby | LLM for coding
239 views · 3 months ago
Tabby link: tabby.tabbyml.com/ Installation link: tabby.tabbyml.com/docs/installation/ Tabby is an open-source, self-hosted AI coding assistant. With Tabby, every team can set up its own LLM-powered code completion server with ease. Installation can be done using Docker, Docker Compose, Homebrew (Apple M1/M2), Windows, Hugging Face Spaces, Modal, and SkyPilot serving. Tabby principles: Open:...
Custom Robot in 15 min | Isaac Sim | Nvidia Isaac
3.6K views · 11 months ago
Wish to step into the shoes of a Robotics Software Engineer without having to invest in any actual hardware? NVIDIA Omniverse™ Isaac Sim is a robotics simulation toolkit for the NVIDIA Omniverse™ platform. Soft_illusion Channel is back with a new video to show you how you can make your own robot in this simulator! (A channel that aims to help the robotics community.) #ISSACsim #nvidia #roboti...
Get Started with Isaac Sim Today! Adding Physics and Shapes Has Never Been Easier.
2.3K views · 11 months ago
Wish to step into the shoes of a Robotics Software Engineer without having to invest in any actual hardware? NVIDIA Omniverse™ Isaac Sim is a robotics simulation toolkit for the NVIDIA Omniverse™ platform. Soft_illusion Channel is back with a new video to get you started with the 101 of using this simulator! (A channel that aims to help the robotics community.) #ISSACsim #nvidia #robotics 0:2...
3D mapping using OctoMap | Webots | Pointcloud
5K views · 1 year ago
#octomap #octovis #pointcloud #computervision OctoMap tutorial: octomap.github.io/ Paper showing the OctoMap implementation using octrees: www.arminhornung.de/Research/pub/hornung13auro.pdf GitHub repo with the Octovis package: github.com/OctoMap/octomap Link to the Robotics Picodegree by Soft Illusion: UA-cam playlist link: ua-cam.com/play/PLt69C9MnPc...
Make your simulation realistic | Pedestrian control with keyboard | Webots
2.3K views · 2 years ago
Building an Attack Insect Robot | Kamigami Musubi Robot
1K views · 2 years ago
How does a median filter work? | Image Processing | Computer Vision | OpenCV | Image Smoothing Blur
4.6K views · 2 years ago
Which is the better filter for Gaussian noise? | Gaussian Filter | Bilateral Filter | Computer Vision Blur
4.1K views · 2 years ago
What is SLAM? | Concept with a story | Localization | Mapping | Robotics Concepts
1.4K views · 2 years ago
How does a mean filter in Image Processing work? | Computer Vision | OpenCV | Image Smoothing Blur
6K views · 2 years ago
YOLO Object Detection with ROS | Darknet_ros | Webots ROS | Robotic Software PicoDegree | Part 7
10K views · 2 years ago
Obstacle-avoiding robot | MoveBase using Lidar | Webots ROS | Robotic Software PicoDegree | Part 6
7K views · 2 years ago
Navigation simulation | MoveBase: Navigation Framework for ROS | Costmap | Planner | Part 5
28K views · 2 years ago
GMapping | ROS with Webots | Robotic Software PicoDegree | Part 4 | Best mapping package
10K views · 2 years ago
What is GMapping? | Theory | ROS | Example | Robotics Concepts
5K views · 2 years ago
Teleoperating a Robot with the Keyboard | ROS with Webots | Robotic Software PicoDegree | Part 3
3.9K views · 2 years ago
Making of URDF | RViz | ROS with Webots | Robotic Software PicoDegree | Part 2
3.9K views · 2 years ago
Robotics Software Engineer PicoDegree | Introduction | ROS with Webots | Part 1
6K views · 2 years ago
Ready to use Webots with NO Installation | Docker for Webots
1.3K views · 3 years ago
Gesture-Controlled Robot Car | Arduino Project | Part 2
1.3K views · 3 years ago
Installation and Introduction to Webots | Make your first custom world in minutes
30K views · 3 years ago
All you need to know about ROS Messages | Tutorial
3.3K views · 3 years ago
All you need to know about TF and TF2 in ROS | Tutorial
17K views · 3 years ago
Spy Robot | Gesture-Controlled Robot Car | Arduino Project | Part 1
627 views · 3 years ago
RFID-based Alarm Clock | DIY Arduino Project
3.5K views · 3 years ago
Robots' Wishes | Happy New Year | 2021 | Webots
1.5K views · 3 years ago
Project implementation of AR-tag detection | OpenCV | Computer Vision ROS2 Tutorials | [Tutorial 13]
4.1K views · 3 years ago
Is your code available on Google?
How do you add other environmental objects like tables , walls etc ?
step by step please~
can this be used with Gazebo?
Thanks! What kind of mean filter is this that you implemented on the 6x6 image? Because, to my knowledge, there are at least 4 types of mean filter in the spatial domain.
Does not work anymore. You cannot input xacro files to the urdf2webots.importer command anymore: python -m urdf2webots.importer --input=myrobot.xacro gives "myrobot.xacro" is not a URDF file.
Why is the anchor of the cylinder -0.075 if the length of the other piece is 0.15? I can't see the reasoning for choosing this figure.
Robot is going below ground i have selected the rigid body though its not working
I was following along but my nav_link is still showing as down. No transform from nav_link to base_link. Can you help me fix this?
hey, for me the nav_link is still down. It says no transform from nav_link to base_link. Can you help me?
hey, i was following along and the nav_link for me is still down. no transform from nav_link to base_link. Do i add the link like we did all the others?
Excellent WORK.....
Hi sir, may I know if there is a way to find the dimension sizes of the objects in the octomap?
please CAN YOU SHARE THE FILES OF THIS PROJECT
Very helpful & simple to understand explanation
Glad it was helpful!
very nice sir ! Thanks a bunchy bunch !
My camera window not opening upon running the code
Cool stuff..
What is the logic behind the value of 64 milliseconds?
My robot passes right through the walls i made from your earlier tutorials.
need some videos on Nao Robot and how to program it.
My Webots is R2023b and it doesnt have any option of scaling in solid. DO you know the reason?
use a Transform Node, and put your solid as its child. Transform Node has scale properties.
@9:08 when i do rostopic echo /model_name. I didnt get any output. So what should i keep in the code for model_name ? TIA
hello ,need help with my project , need someone with good understanding of webots
How did you implement odom so that moving inside a 3D simulation (Gazebo) doesn't lose localization? When I use navigation after manual movement, AMCL loses the location and I have to set it again with RViz. Do you have a group or community on social networks that helps people learn ROS 1 and ROS 2?
do you have any video and cod for traffic lights detection and their states in webots?
can we somehow replace the interactive marker with an absolute value, for example i want to +x just by pushing 'w' on my keypad?
thank you so much, is clear and well explained
Please do more videos on Isaac sim :)
We will . Thanks for the comment
Thank you!
How to use the navigation stack with the Pioneer robot (with ROSARIA) and the RPLIDAR A3 sensor (Hector SLAM or GMapping library)? Hey, my question is how to use the ROS Navigation Stack to perform autonomous navigation on the Pioneer robot. I am using the rosaria library for control, and I am also using the rplidar_03 sensor with the Hector SLAM library in the project. If anyone can help me :)
Can this be done with a monocular camera?
No, you need the point cloud. So either RGB-D or stereo camera or LIDAR to get the depth information.
I have 3 of these robots that my son wants to build but I can’t find the APP for the instructions and to connect them. Is it still available?? I’m using an Apple device.
They took down the app from App Store 😑
Darn, okay, thanks for your reply!
Hey this is really awesome video!! But the music is really loud and your speaking is a lot quieter so it hurts ears when the music pops up.
Can i use supervision to link cube with epuck?
Why is my solid sinking into the floor? I changed the mass, contact material, etc., and it still sinks.
This work with ROS or ROS2?
Great work. However, I am not able to see the Webots simulation because, after installing Webots, the display stops at the welcome screen. Any possible solution?
Can we also visualize custom messages-based data i.e. I have lidar based object list can I visualize it in rviz2?
Sir, I want to move the ArUco marker randomly. Is it possible? If yes, how can I do it? Thank you!
Do you have python code for GPS integration?
I am getting an error while trying to launch master.launch "process[webots-2]: started with pid [40645] WEBOTS_HOME environment variable not defined." Please help.🙏🙏
For people who don't turn, check to see if the wheels are in the right order. The correct order of wheels in the code is left front (1), right front (2), left rear (3), right rear (4). Also if your wheels are not spinning but rolling, turn the robot in the direction where the wheels are spinning, also the red line is the direction of the wheels.
Hi, thank you :) Can you do more of these videos? I have a project in Isaac Sim, and I don't know how to build a robot in it, a little more functional robot than this one. I need your help :) You are the only one who posted a tutorial on building robots inside Isaac Sim.
In my service, robot name is not showing, only service is displayed. rostopic echo /Cam_robot_ give me this (WARNING: topic [/Cam_robot_] does not appear to be published yet.
Same, did you solve it please?
answer me as soon as you can pls
Awesome video, how Can we add gripper in it?
thank you, It was a very informative series. Do you have any idea about the same fo biped robot?
The "WEBOTS_HOME environment variable not defined" error comes when I do master.launch; nor am I able to install Webots.