How to install ROS on Windows 10

Now any ROS developer worth their salt will recommend that you start developing ROS on Ubuntu. If you are in the roughly 2% who use Ubuntu or another Linux-based OS, that advice is spot on, and you will probably be up and running in a couple of hours. But what if, like about 75% of people out there, you use Windows?

Source: https://gs.statcounter.com/os-market-share/desktop/worldwide

You would stumble at every step, the nightmarish learning curve would give you sleepless nights, and you would get the jitters whenever you heard the word ‘terminal’. Now, I am not saying you shouldn’t give Ubuntu a shot if you have only ever worked on Windows. Ubuntu is a great OS! I have been using it as my daily driver for more than 10 years now.

But you just want to learn ROS, so why bother learning to navigate a whole new OS for it? Microsoft has got your back (and probably your data) on this. The big M is fully on board with ROS and now sits on the core committee that decides its future. They have done enough porting and development in the past few years that it is now possible to enjoy ROS from the comfort of your Windows.

So without any further yip-yap, let’s start our ROS journey on Windows 10. Here is what you need to do:

Step 1: Make sure that you are using a 64-bit version of Windows 10 Desktop or Windows IoT Enterprise. If not, learn Ubuntu.

Step 2: Install Windows PowerShell using this article as a reference.

Step 3: Add the ‘C:\opt’ location, where we will be installing ROS, as an exception in your antivirus software. It will probably interfere with both installation and use otherwise.

Step 4: You will need 10 GB of free space on your C: drive, and you will need to free up the ‘C:\opt’ location if it’s currently being used. There is no way to install ROS anywhere else at the time of writing.

Step 5: Download Visual Studio 2019 from here and install it with the following configurations:
– Select the “Desktop Development with C++” option in the workload section.
– Select “Windows 10 SDK (10.0.19041.0)” in the individual components section.
– Select the “English” language pack – there is no support for other languages in ROS yet.

This might take some time, so grab a glass of water because you want to be healthy.

Step 6: Click on the start menu and search for “x64 Native Tools Command Prompt for VS 2019”, right-click on it and run it as administrator.

Step 7: A package manager makes it much easier to install and maintain the various ROS libraries and tools. On Ubuntu, ‘apt’ takes care of this; on Windows 10, it will be ‘Chocolatey’. Paste the following into the command window you just opened.

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

Hit enter on this long-ass command and watch it install Chocolatey; accept any prompts that come up. Now close the command window and relaunch it, same as before.
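Once it is back open, you can quickly confirm that Chocolatey made it onto your PATH (a little sanity check of mine, not part of the original steps):

choco --version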

Step 8: Type/copy-paste the following into the new command window to install Git.

choco upgrade git -y

Use the following command to test if Git is installed correctly.

git --version

Close the window again and reopen it the same way.

Step 9: Now comes a choice you must make: ROS 1 or ROS 2. Check out this article to find out the difference between the two.

If ROS 1 is your choice, enter the following commands into the command window.

mkdir c:\opt\chocolatey
set ChocolateyInstall=c:\opt\chocolatey
choco source add -n=ros-win -s="https://aka.ms/ros/public" --priority=1
choco upgrade ros-noetic-desktop_full -y --execution-timeout=0

If it’s ROS 2 that you chose, enter the following.

mkdir c:\opt\chocolatey
set ChocolateyInstall=c:\opt\chocolatey
choco source add -n=ros-win -s="https://aka.ms/ros/public" --priority=1
choco upgrade ros-foxy-desktop -y --execution-timeout=0

It will take some time; go take a break and watch the leaves grow on a tree near you.
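Once it finishes, you can sanity-check that everything landed where expected (assuming the default ‘C:\opt’ install location; this check is my addition):

dir c:\opt\ros

You should see a ‘noetic’ or ‘foxy’ folder, depending on which version you picked.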

Step 10: Each time you open a new command window, you will need to run a script to get all the ROS commands working. Doing that every time is a bit tedious, so we will create a shortcut that does it for us whenever we open a new command window.

Right click on your desktop and select New > Shortcut

Depending on the Visual Studio edition you have installed, type the following in the window that opens.

For Community:

C:\Windows\System32\cmd.exe /k "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Tools\VsDevCmd.bat" -arch=amd64 -host_arch=amd64&& set ChocolateyInstall=c:\opt\chocolatey&& c:\opt\ros\noetic\x64\setup.bat

For Professional:

C:\Windows\System32\cmd.exe /k "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\Tools\VsDevCmd.bat" -arch=amd64 -host_arch=amd64&& set ChocolateyInstall=c:\opt\chocolatey&& c:\opt\ros\noetic\x64\setup.bat

For Enterprise:

C:\Windows\System32\cmd.exe /k "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\VsDevCmd.bat" -arch=amd64 -host_arch=amd64&& set ChocolateyInstall=c:\opt\chocolatey&& c:\opt\ros\noetic\x64\setup.bat

Then name the shortcut “ROS” or whatever your heart desires; it will open the command window either way.

Now we need to make the shortcut always open as an administrator. Right-click on the shortcut, navigate to Properties, choose the ‘Advanced…’ option and check the box for “Run as administrator”. Hit OK and then Apply the changes.

As a final test, let’s run ROS. In the opened window, type

roscore

and hit enter; you should see the following happen.

Congratulations! ROS is now up and running on your Windows 10!
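If you want one more sanity check, leave roscore running, open a second window using the same shortcut, and list the active topics:

rostopic list

A fresh ROS master should report at least /rosout and /rosout_agg.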

Your next steps would be to read through the excellent tutorials on the ROS wiki. And if you have a question, go to answers.ros.org and post it with a ‘windows’ tag. Enjoy!

Mudskipper Robot

Mudskippers are the coolest fish on the planet. Why? Because they can breathe on land and walk around without a care. I tried to replicate their cute, unique style of movement in this project and ended up creating an Albino Alpha Mudskipper Robot, AAMuR for short. Check out the video tutorial above to see how I made it.

Interested in making one for yourself? Here are the things that you will need:

I designed the main body in Fusion 360. You can find the STL files for this project on Thingiverse here.

3D printed parts

Use the following circuit diagram for the electronics.

Circuit Diagram for Mudskipper Robot

The code for this project is on GitHub here. Do check out the comments to see how the robot works.

The final result.

You can control the robot using your phone. Just type in the IP address that you get from the serial monitor once you have uploaded the code.

That is it folks! See you in the next post!

Smart Lithopane Display

YouTube Tutorial

Photographs can be 3D printed, and they are awesome! A print may look like garbage at first glance, but shine a light through it and you will be amazed by the detail it captures. These prints are called lithopanes, and in this blog post we are going to make one smarter.

Here are the things you will need (affiliate links):

  1. ESP01 : https://amzn.to/3tftM6n
  2. ESP01 USB Adapter/Programmer : https://amzn.to/3a82l6Y
  3. Neopixel LED Ring (8 LED): https://robu.in/product/8bit-ws2812-5050-rgb-led-built-full-color-driving-lights-circular-development-board/
  4. Printed Circuit board: https://amzn.to/3dh18Mv
  5. 5V to 3.3V Converter: https://amzn.to/32eNymw
  6. Micro USB breakout board: https://amzn.to/3mKKsjH
  7. Female berg connectors: https://amzn.to/3siKe4g
  8. Wires for connection
The stuff you need

Use the circuit diagram given below to make the connections.

Circuit Diagram

You will get something like this:

Electronics Assembly

Download the STL file for the body from here and print it using your favorite 3D printer. I used an Ender 3 with PLA filament for mine. I recommend white, which looks best with all the lighting effects.

Display body printed using Ender 3

Go to the itslitho website to generate the lithopane. The resolution of the photograph should be 990×700 px for it to be compatible with the body. Set the frame thickness to 4 mm and the width to 2 mm. Print the lithopane with 100% infill for the best results.

Lithopane printed using an Ender 3

Program the ESP01 with the code from here. Refer to this article to learn how to program an ESP01.

Place the electronics assembly inside the body and slide the lithopane in from the side. Power it up by connecting it to a 5V charger using a micro USB cable.

Use the web app to control the lighting effects. You can access the web app by connecting your device (phone/computer) to the same network as the ESP01 and typing in the IP address set in the code.

Web app

And you have your very own Smart Lithopane Display! See you in the next post.

Obstacle Avoiding Robot

Hey guys! Robotics series LvL 1 is here: we will build an obstacle avoiding robot this time! Using the popular Arduino platform and a few easily available components, we will unleash an ever-roaming, unstoppable menace (until the batteries die, of course). Here are the things we will need:

These are affiliate links; do use them if you can, to support the website.

Let us now prepare the chassis. Place the motors on one end of the MDF board, coaxially, using a glue gun. The castor wheel goes at the opposite edge, right in the middle; use a spacer to adjust the height so that the robot stays level. Use the following image as a reference.

The robot body

Alright! Now that you have them, use the following circuit diagram to connect everything together.

Circuit Diagram

Use this link to get the code and upload it to the Arduino board; use this article if you are doing this for the first time.

Here is the completely assembled robot.

Cute, is he not? Now turn it on and let it roam around and explore the world.

And that’s it! Hope you enjoyed this post, see you in the next one!

Give your next Robot 3D vision: Kinect V1 with ROS Noetic

Kinect sensors are an amazing piece of technology. I would even call them revolutionary. But sadly, the product was a bit ahead of its time, and now big ‘M’ has stopped producing it. They are going to bring back Kinect in another avatar, but I don’t think it will be as cheap as its older brother.

The Kinect 360’s primary target was gaming, where it never took off. But another set of people recognised its value and adopted it: roboticists, DIY enthusiasts and researchers in the field of computer vision saw it for what it was, a cheap way to test and build their ideas. This, my dear friend, is a fact even now. How do I know? I just bought one for the same purpose; it arrived today.

Buut, this amazing sensor has a bit of a learning curve when it comes to making it run on your system for the first time. So if you are ever going to be in my shoes, let me try to make it at least a little easier for you. Follow along as I take control of my Kinect 360 using ROS. I have tested this on Pop!_OS 20.04 with ROS Noetic installed.

As always, start with an update and upgrade.

sudo apt-get update
sudo apt-get upgrade

Install the dependencies

sudo apt-get install git-core cmake freeglut3-dev pkg-config build-essential libxmu-dev libxi-dev libusb-1.0-0-dev

Get the libfreenect repository from GitHub

git clone https://github.com/OpenKinect/libfreenect.git

Make and install

cd libfreenect
mkdir build
cd build
cmake -L ..
make
sudo make install
sudo ldconfig /usr/local/lib64/

To use the Kinect without sudoing every time, add yourself to the ‘video’ and ‘plugdev’ groups.

sudo adduser $USER video
sudo adduser $USER plugdev
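These group changes only apply to new sessions, so log out and back in. You can then verify the membership (a quick check I am adding, not from the original references):

groups $USER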

Next we will add some device rules

sudo nano /etc/udev/rules.d/51-kinect.rules

Paste the following, then press Ctrl+O to save and Ctrl+X to exit.

# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
# ATTR{product}=="Xbox NUI Audio"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
# ATTR{product}=="Xbox NUI Camera"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02c2", MODE="0666"
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02be", MODE="0666"
# ATTR{product}=="Xbox NUI Motor"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02bf", MODE="0666"
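To apply the new rules without rebooting, reload udev and replug the Kinect (standard udev commands, not spelled out in the original references):

sudo udevadm control --reload-rules
sudo udevadm trigger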

Now we need to fetch the audio firmware. Run the following to get the ‘audios.bin’ file: open a terminal inside the libfreenect folder and type,

python3 src/fwfetcher.py

Now we need to copy ‘audios.bin’ to a specific location.

sudo cp src/audios.bin /usr/local/share/libfreenect

Run the following to check the audio.

freenect-micview

You should see this

The waveform should reflect your speech. Next we will try the depth camera.

freenect-glview

Congratulations! Your Kinect now works on Ubuntu!

Now let’s make it work with ROS. We need to get some launch files for visualization.

Create your catkin workspace directory; skip this step if you already have one set up.

mkdir -p ~/catkin_ws/src

Now we will download the required ROS package.

cd ~/catkin_ws/src
git clone https://github.com/ros-drivers/freenect_stack.git

Now we will run the catkin_make command so that the ROS installation on our system recognises the new package.

cd ..
catkin_make

Source the setup file in the newly created ‘devel’ directory so that our ROS environment can find the launch files.

source devel/setup.sh
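If you don’t want to repeat this in every new terminal, the usual ROS convenience is to append it to your shell startup file (assuming bash):

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc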

Now we will launch the freenect example with depth registration, which gives you the point cloud with RGB data superimposed on it.

roslaunch freenect_launch freenect.launch depth_registration:=true
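With the launch file running, you can confirm from another (sourced) terminal that the camera topics are up; this quick check is my addition:

rostopic list | grep camera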

Let’s now visualize the topics from the Kinect in RViz. Open a new terminal and launch it.

rviz

We now need to set a couple of parameters in RViz to visualize the depth registration data.

  1. In the ‘Global Options’ set the ‘Fixed Frame’ to ‘camera_link’.
  2. Add a ‘PointCloud2’ display and set its topic to ‘/camera/depth_registered/points’.

Now wait a few seconds for the points to show up!

Me looking like a zombie

Hope I was of some help, best of luck on your Kinect Adventures! See you in the next post!

References:

  1. https://doc.bwbot.org/zh-cn/books-online/xq-manual-en/topic/557.html
  2. https://naman5.wordpress.com/2014/06/24/experimenting-with-kinect-using-opencv-python-and-open-kinect-libfreenect/
  3. http://www.choitek.com/uploads/5/0/8/4/50842795/ros_kinect.pdf
  4. http://wiki.ros.org/ROS/Tutorials/CreatingPackage

DIY Home Automation: Motion Sensing Switch

DIY Motion Sensor Switch

I have this habit of never turning off the lights in my bathroom when I leave; they stay on the entire night, and in the morning I get an earful from my mom or my flatmates, depending on where I am. After acknowledging that I was never going to break this habit, I did the obvious thing: I built a motion sensor switch. It automatically turns off the light in the bathroom when no one is inside and turns it on when somebody enters. Trust me, this was much more useful than I thought it would be. So here is a guide on how to make one, if you are like me.

A list of things you need to get started:

  1. PIR Motion Sensor Module x 1 : Banggood | Amazon
  2. 5V Relay Module x 1 : Amazon | Banggood
  3. 5V AC to DC converter x 1 :
  4. Female DuPont Cables : Amazon | Banggood
  5. 18 AWG wire(for AC connections).
  6. A suitable enclosure.

Disclosure: These are affiliate links, I get a small commission for qualifying purchases.

Things you need

Got them? Let’s begin the build. Refer to the connection diagram below to wire up the components.

Test Circuit

Test the circuit to see if it detects motion by swiping your hand across the PIR motion sensor. Use the timing and sensitivity dials (left and right respectively in the image below) to set the required on-time and range. The timing dial controls how long the output stays on for each detection, anywhere from 3 s to 5 min. The sensitivity dial adjusts the detection range from 3 m to 7 m.

Download STL files from here and print them to get your enclosure. Don’t have a 3D printer yet? Don’t worry, a simple plastic box would do.

Install it in your bathroom (or wherever) by referring to the circuit diagram given above.

Done and done! Simple, wasn’t it? Enjoy the surprise on your guest’s face when they enter your bathroom ;p !

Follow this excellent tutorial if you want to learn more about PIR motion sensors.

Thank you for reading, hope you enjoyed it, see you in the next post!

Headless Ubuntu Server on Raspberry Pi

Raspberry Pi is an awesome single board computer. People have made all sorts of cool things with these versatile, compact packages. If you are one of those people, you might have started your journey with Raspberry Pi OS (or Raspbian, if you prefer the old name), which is the default recommendation and, frankly, great: it is optimised to run on any Pi, from the Zero to the 4. But there are tons of OS options out there, and Ubuntu Server is my favourite among them!

In this article we will explore how to install Ubuntu Server on your Raspberry Pi. Ubuntu Server is a good choice if you want to use your Pi as a server: a headless system taking care of the good stuff in the background. You might also want to save some resources by giving the GUI a skip, so that the important stuff has more juice to run on. We will do a headless installation, i.e. no need to connect your Pi to a keyboard, mouse or display; instead we will use SSH from our desktop to talk to the Pi.

I have tested this method on a Raspberry Pi 4 with Ubuntu Server 20.04 LTS, but I think it will work for other combinations as well. I am using a Linux desktop (Pop!_OS 20.04), so if you want to try this from Windows you will have to make some adjustments. So let us begin!

Pop the micro SD card into your PC and install Raspberry Pi Imager using the following commands.

sudo apt update
sudo apt install snapd
sudo snap install rpi-imager

Launch Raspberry Pi Imager and you will see something like this

Click on the ‘CHOOSE OS’ button and select the Ubuntu Server version you want to install; I chose the 32-bit LTS version.

Now select the SD card you want to write the image to. Check, and double-check, that you have chosen the right one, otherwise you will lose your data. Then click ‘WRITE’ and confirm your choice.

This will write the image to the SD card. Once it is finished you will be prompted to remove the card, but don’t do it yet! We still need to configure WiFi for our headless setup.

Go to the newly created ‘system-boot’ partition and open a terminal in it. Use your favourite editor for the next step; I used gedit.

sudo gedit network-config

This will open the network configuration file. Edit it with your WiFi SSID and password as shown below, and be careful with the indentation.
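The original post shows this as a screenshot, so here is a minimal sketch of the wifis block you need to add (or uncomment); "YOUR-SSID" and "YOUR-PASSWORD" are placeholders for your own network:

wifis:
  wlan0:
    dhcp4: true
    optional: true
    access-points:
      "YOUR-SSID":
        password: "YOUR-PASSWORD"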

Save the file, safely remove the SD card, pop it into your Pi, and power it on. The Pi should connect to the WiFi network you provided in the previous step; make sure your desktop is on the same network. To SSH into the Pi we need its IP address. First, check your desktop’s IP address using the following command in a terminal.

ifconfig

Take the first three segments of your IP address and use the following commands to scan for your Raspberry Pi’s IP address (adjust the subnet to match yours).

sudo apt install nmap
sudo nmap -sP 192.168.18.0/24

Now that we know the IP address, we can SSH into the Pi using the following command.

ssh ubuntu@192.168.18.80

When it prompts for confirmation, type ‘yes’, and when asked for the password, type ‘ubuntu’, which is the default; you will be prompted to change it on first login. The connection will then close automatically; restart SSH using the new password.

Yay! We can now use the Pi from our desktop! You can also install your favourite desktop environment if you need a GUI, but the main purpose of this exercise is a bare-bones system with plenty of headroom for the heavy stuff. So what are you waiting for? Go Pi your heart out!

I hope you enjoyed this post, see you in the next one!

Mycroft: An open source smart assistant for your everyday needs

Smart assistants are here and they are super cool! With voice commands you can enjoy music, play trivia, shop online and control appliances! And unless you are living under a rock, chances are you have heard of ‘Alexa’ from Amazon, ‘Siri’ from Apple, ‘Cortana’ from Microsoft and ‘Google Assistant’ from, well... Google (they seriously need to work on their naming game). They are soaring in popularity thanks to awesome features and, frankly, aggressive marketing.

The good side is that they are useful and allow us to interact with technology in a more natural way. The bad side? They are listening, and you don’t know where, when, what or how the big corporations behind them are using your data; since laws are still catching up to the tech, we are in murky waters. We also don’t have any control over the inner workings: they are what their parent company wants them to be. Enter Mycroft, the open source assistant!

They are listening

Mycroft has a lot of features in common with its elder brethren, and offers something they don’t: the freedom to choose how it works. You can take a look at the algorithms behind the code and, if willing and able, contribute to it as well. We need this, a competitor that is open to our scrutiny; the importance of this cannot be overstated.

Did I get your attention yet? Good, let’s see how we can use it! If you are a fellow Raspberry Pi fan, good news: there is a Raspbian-based image called Picroft which can be downloaded from here. Install it as you would regular Raspbian. If you already have an OS running on your Pi, or would like to try Mycroft on your regular desktop, follow the instructions below.

I will assume that you know your way around a terminal and are installing on Raspbian, Ubuntu, Arch or Debian. I have tested this on Ubuntu Server 20.04 LTS running on a Raspberry Pi 4 with 2 GB RAM.

For Ubuntu Server you will need to install some prerequisites and change some settings; on other systems, skip this step.

sudo apt-get install linux-sound-base alsa-base alsa-utils alsa-firmware-loaders alsa-oss alsa-source alsa-tools alsa-tools-gui alsa-utils alsamixergui

sudo usermod -a -G audio <your username>

We will use git to download the latest Mycroft from their GitHub page.

cd ~/
git clone https://github.com/MycroftAI/mycroft-core.git

Run the following command, and you should see a folder called mycroft-core.

ls -a 

Get inside the mycroft-core directory and run dev_setup.sh. This will start an interactive session with several questions and permissions that need to be answered in Yes/No format. Almost all will be Yes if you are not a developer, but do read them carefully. It will then install all the required packages. Get a coffee, this might take some time, especially on a Pi, but don’t doze off: you will need to input your password at the end to finish the setup.

cd mycroft-core
bash dev_setup.sh

After about 30 minutes on a Pi 4, the installation should finish. Let’s test it; use the following command to start Mycroft.

~/mycroft-core/start-mycroft.sh debug

This will open Mycroft in debug mode, which helps you check whether your audio is working and the wake word is being detected. Now say ‘Hey Mycroft, pair my device’. This will give you an alphanumeric code, which you use to register your device at https://home.mycroft.ai by creating an account there. Registration sets up regional settings such as time zone and location, for weather and other location- and time-based updates. You can opt in to their open dataset program, where your wake-word recordings are used to better train the detection network, and you can opt out as well.

To stop Mycroft type:

~/mycroft-core/stop-mycroft.sh
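Once you are done debugging, the same script can also start Mycroft in the background without the console (the ‘all’ option is part of the script; the original walkthrough only uses debug mode):

~/mycroft-core/start-mycroft.sh all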

Troubleshooting: Frankly, it is not that easy to set up Mycroft; you can run into many problems, particularly regarding audio. But if you can learn to enjoy that, it is a rewarding experience. Click here to go to the troubleshooting page.

PS: For me the trouble was with PulseAudio. After hours of searching the internet I found a solution: typing ‘pax11publish -r’ in the terminal. I was SSHing into my Raspberry Pi and PulseAudio was unable to start due to a conflict with X11.

Next we will write a service script to launch Mycroft at startup.

sudo nano /etc/systemd/system/mycroft.service

Now type/copy-paste the following, making sure to adjust the paths to reflect your configuration, e.g. your home folder location and username.

[Unit]
Description=Mycroft AI
After=pulseaudio.service

[Service]
User=<your username>
WorkingDirectory=/home/<your username>/
ExecStart=/home/<your username>/mycroft-core/bin/mycroft-start all
ExecStop=/home/<your username>/mycroft-core/bin/mycroft-stop
Type=forking
Restart=no

[Install]
WantedBy=multi-user.target
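After saving and closing the file, reload systemd so it picks up the new unit (a standard systemd step that the original post skips):

sudo systemctl daemon-reload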

Then type the following to enable the service.

sudo systemctl enable mycroft.service

Use the following commands to start or stop the service.

service mycroft start
service mycroft stop

That concludes this article; I hope it was of some help. In later articles we will explore how to create custom skills for Mycroft! See you in the next one!

RoachBot: The DIY Light Avoiding Robot

Robots are fun! Those crawling, walking, roving, flying, swimming little machines are a joy to play with, but wouldn’t it be even better if we made one ourselves? Lucky you! We are gonna do just that! This article is the very first installment of the robots series I have planned. We will start with the very basics and build our skill set and tools as we move along this wonderful journey, creating ever more complex oh-my-god-did-you-make-it-yourself?! kinds of robots. So hop on, strap in, let’s begin.

Watch and build!

RoachBot is your average light-hating robot: he runs away from it and will do anything in his capacity to stay in the darkness. So what’s he made of? I have got a list (buy from the links if it doesn’t cost you any extra):

  1. Light Dependent Resistor (LDR) x 2
  2. BD139 NPN Transistor x 2
  3. 50k Potentiometer x 2
  4. 150 Ohm Resistor x 2
  5. Breadboard x 1
  6. 9 V Battery with Clasp / Powerbank with USB Cable x 1 :
  7. Jumper/DuPont Cables x a bunch
  8. To make the robot body you can buy an inexpensive kit like this, or buy the parts individually using the links below. If you get the kit, you won’t need the rest of the items below.
  9. DC Motor x 2
  10. Wheel x 2
  11. Castor Wheel x 1
  12. MDF Board x 1 (a 150mm x 150mm would do nicely)

Got ’em? Let’s begin! The first step is to assemble the controller; refer to the diagram below, and ignore the motors and battery for now.

The brains are done; now comes the body. We will use a standard differential drive. Use a glue gun to attach the motors on both sides of the MDF board as shown below, and drill two holes to pass the motor wires through. Use a piece of wood, or anything you can get your hands on, to attach the castor wheel at the front so that the rover stays level.

Yep I used screws, but hey no need for that, just glue it.

Now let’s combine everything! Make the rest of the connections by referring to the diagram above, and when you are done, you will have this!

The RoachBot

All done! But where is the code? There is no code: the LDR-transistor pairs drive the motors directly, so the light-avoiding behaviour is baked into the circuit itself. So what are you waiting for? Build one! I will see you in the next post with more cool stuff like this!

Alexa Enabled Smart WiFi Switch

Hello world! Smart homes are coming, and nothing short of an apocalypse can stop them. And no, we will not DIY an apocalypse; instead we will jump on the bandwagon and build a smart switch to call our own. Read on to jump into the build!

Step 1: Following are the components you’ll need (the links given are affiliate ones, do consider buying from them if it doesn’t cost you any extra):
    1. ESP01: https://amzn.to/2ZTqusE
    2. 4 channel relay module: https://amzn.to/35WcemW
    3. 5V ac to dc converter: https://amzn.to/2Hl2Uig
    4. 5V to 3.3v step down power supply: https://amzn.to/35TFnPz
    5. Proto board: https://amzn.to/33PsNOy
    6. 10k resistor x 2
    7. Female header strip: https://amzn.to/3cfIPFw
    8. 1000uF capacitor: https://amzn.to/3cky0Sl
    9. Metal Oxide Varistor 400V (Optional)
    10. Jumper wires: https://amzn.to/32OCmhB
    11. Twisted 18AWG wires for AC
    12. A box to hold em all
Components

Step 2: Prepare the interface board by referring to the connection diagram given below.

Step 3: Place the relay module inside the box and wire up the relay outputs with the twisted wires as shown below. Attach a MOV across the output of whichever relay will control an inductive load like a fan or CFL.

Step 4: Download the code from https://github.com/mijazm/ESP01WifiSwitch and upload it to the ESP01 module. Check out my previous post if you need help.

Step 5: Connect the interface board to the relay as shown in the connection diagram below.

Step 6: Power up the project and set up your Echo device by saying “Alexa, discover devices”. This will take some time; once it’s complete you will be able to see the new devices in your Alexa app.

Step 7: Try the commands! Say “Alexa, turn on fan” or whatever device names you used in the code.

Step 8: Connect it to your switch board by referring to the images given below.

I hope you enjoyed this post, see you in the next one!