One of the tools we’ve been looking to build with the AC Training Lab is an OpenFlexure microscope. IO Rodeo recently began selling a kit version, and one of the AC summer students has started to build it!
End of day 1 and it’s already looking like the hardware build is complete. Still some electronics and installation to go, I think.
Kudos to the summer student, OpenFlexure, IO Rodeo, and Open Science Shop for making this a seamless experience so far.
Building and setting up the OpenFlexure microscope with the IO Rodeo kit was straightforward, without major difficulties. There weren’t any missing parts, and almost everything needed, including tools, was in the kit. The software even came preloaded on all of the hardware that required it, which was very helpful. The instructions were clear, especially with the help of IO Rodeo’s complementary instructions. The build process was mainly mechanical; the electronic assembly came at the end. The main challenge is figuring out whether it was done without mistakes, since there isn’t much feedback until you test it for the first time. The build and setup took me two days, or around 13 hours.
The challenging steps I remember are steps 3, 5, and 11. Step 3, “Prepare the main body”, is the only step that requires external tools (listed in the instructions, e.g. flush cutters). Once you have those, it is quite straightforward. Step 5, “Assemble the actuators”, has many confusing and difficult sub-steps and instructions. This page, which I only found after finishing, seems quite helpful for this step as it outlines it clearly; however, it was written for an earlier version of the microscope. The step where you put the light oil on the actuators is quite unclear, and it is also unclear how the actuators are supposed to look and work. Further into the build I understood that the actuators tighten and loosen to move the stage around the stand, but it would be great if that were explained earlier in the instructions to clear up doubts about how the actuators are supposed to look. Attaching the feet to the actuators requires a good amount of physical force, which I didn’t know, so the first one took a while to attach. Step 11, “Complete the wiring”, was quite difficult because you need to work with electronics, and the link to the IO Rodeo instructions from the other assembly instructions page doesn’t work. Here is the working link. Also, the instructions never say to insert the microSD card into the Raspberry Pi. There were also many minor challenges I faced throughout the process.
I learned a lot while assembling the OpenFlexure microscope. I learned how to set up a Raspberry Pi, as I had never worked with one before. I learned that mechanisms with flexible parts can still be quite precise. I also learned how to assemble this type of kit from instructions. Overall, I learned a good amount, the build process was generally very intuitive, and I would be happy to build more from OpenFlexure in the future.
Here are some pictures it took, some pictures of the final product, a video of it calibrating, and a video of 40x zoom on a piece of plastic.
IMG_0911.MOV - Google Drive
One of the next pieces will be to work on image stitching with programmatic control via Python using the following OpenFlexure package:
possibly using Fiji’s Image Stitching extension:
Some related resources:
- OpenFlexure Programming Clients
- Stitching images - #12 by dgrosen - Request Help - OpenFlexure Forum
- Stacking and scanning what software - General - OpenFlexure Forum
- https://forum.image.sc/ (forum that is all things scientific imaging/microscopy)
- ImageJ and Python, specifically GitHub - imagej/pyimagej: Use ImageJ from Python
- Stitching OFM images (YouTube, 11 min)
- image-stitching · GitHub Topics · GitHub (e.g., Meshroom, OpenStitching, but mostly geared towards panoramas and not necessarily microscopy images)
- stitching_tutorial/docs/Stitching Tutorial.md at master · OpenStitching/stitching_tutorial · GitHub
- Image Composite Editor (ICE) by Microsoft (retired, no longer supported), and an unofficial CLI interface to ICE
- OpenCV: Images stitching
- Microscope image stitching package in Python - Announcements - Image.sc Forum
- ImageJ plugin in Python? - #9 by karlduderstadt - Development - Image.sc Forum
- Microscopy-specific packages
- GitHub - labsyspharm/ashlar: ASHLAR: Alignment by Simultaneous Harmonization of Layer/Adjacency Registration (CLI seems fine, Python API not documented, some nice example images, somewhat active maintenance)
- GitHub - yfukai/m2stitch: MIST-inspired microscope image stitching package (minimal instructions/docs, Python API, seems inactive)
- GitHub - usnistgov/MIST: Microscopy Image Stitching Tool (Java and MATLAB only, seems inactive)
- Pycroscopy — pycroscopy 0.63.3 documentation (Scientific analysis of nanoscale materials imaging data)
- https://micro-manager.org/ (Microscope control and image acquisition integrated with ImageJ - lots of discussion about writing an OpenFlexure integration, but writing C++ drivers seems to be the main blocker, see [1], [2], [3])
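As a rough sketch of what programmatic control for a stitching scan could look like: the client calls in the comments below use the same API names as the code later in this thread (`find_first_microscope`, `.position`, `.move`, `.capture_image`); the grid helper itself is my own illustration, with hypothetical step sizes.

```python
# Sketch: generate serpentine stage targets for a grid scan.
# The helper is a pure function, so it can be tested without a microscope.

def scan_grid(x0, y0, cols, rows, step_x, step_y):
    """Return (x, y) stage targets covering a cols x rows grid.

    Serpentine (boustrophedon) order keeps each move short, which
    reduces the effect of stage backlash between frames.
    """
    targets = []
    for c in range(cols):
        rows_order = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)
        for r in rows_order:
            targets.append((x0 + c * step_x, y0 + r * step_y))
    return targets

if __name__ == "__main__":
    # Hypothetical step sizes in stage units; real values depend on the optics.
    for x, y in scan_grid(0, 0, 2, 3, 3850, 2900):
        print(x, y)
        # With a microscope attached, one would do roughly:
        # p = microscope.position; p["x"], p["y"] = x, y
        # microscope.move(p); image = microscope.capture_image()
```

The images would then be handed to one of the stitching tools listed above.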
Well done!
I also stumbled across this post from the last couple of days: First Build-High Resolution v7 obtained from IO Rodeo - Build Reports - OpenFlexure Forum
Sounds like there may have been some similar issues.
EDIT: I also found a YouTube video with assembly instructions. Haven’t tried it, but it seems nice!
There were a few problems with sending commands to the microscope through Python, but they were caused by the outdated operating system. I wrote a quick program that stitches images together using manually entered distances to estimate how far the camera moves per frame.
I am currently having a lot of trouble downloading the other image-stitching program and connecting the microscope to the internet.
Here are some of the output images.
Here is my code, because the forum doesn’t let me upload a .py file:
import openflexure_microscope_client
from PIL import Image

m = openflexure_microscope_client.find_first_microscope()
p = m.position

x = int(input("how many frames in the x direction? "))
y = int(input("how many frames in the y direction? "))
focus = int(input("autofocus every frame? (1 = yes; generally better images, less precise alignment) "))

# Stage units moved per frame in each direction
xframe = 3850
yframe = 2900

# One list of images per column of the scan
imagearray = []
for i in range(x):
    imagearray.append([])

# Move to the starting corner; the inner loop increments y *before*
# capturing, hence the (y + 1) offset here versus (x - 1) for x.
p["y"] -= (y + 1) / 2 * yframe
p["x"] -= (x - 1) / 2 * xframe

# Capture the grid column by column
for i in range(x):
    for j in range(y):
        p["y"] += yframe
        m.move(p)
        if focus == 1:
            m.autofocus()
        im = m.capture_image()
        imagearray[i].append(im)
    p["y"] -= yframe * y
    p["x"] += xframe

# Paste each column of images into a vertical strip
verticalbars = []
for column in imagearray:
    images = list(column)
    widths, heights = zip(*(im.size for im in images))
    width = widths[0]
    height = heights[0]
    totalheight = sum(heights)
    verticalbar = Image.new("RGB", (width, totalheight))
    placeheight = (len(images) - 1) * height
    for im in images:
        verticalbar.paste(im, (0, placeheight))
        placeheight -= height
    verticalbars.append(verticalbar)

# Paste the strips side by side into the final image
widths, heights = zip(*(bar.size for bar in verticalbars))
width = widths[0]
height = heights[0]
totalwidth = sum(widths)
finalimage = Image.new("RGB", (totalwidth, height))
placewidth = (len(verticalbars) - 1) * width
for bar in verticalbars:
    finalimage.paste(bar, (placewidth, 0))
    placewidth -= width
finalimage.show()
OpenFlexure Stitching was by far the most frustrating part of this project, which might be because of my limited experience with the command line. In retrospect it was quite simple: I just needed to follow the commands line by line and install libvips in the correct location, but I struggled for about 4 or 5 hours trying to get it to work (my fault entirely). The stitch that the OpenFlexure Stitching program made was far better than my primitive code. I also got the Fiji stitching plugin working because I was so frustrated with OpenFlexure Stitching. The stitches from both programs are almost indistinguishable.
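For reference, the command-line setup amounted to roughly the following sketch (Windows paths; the pip package name and the libvips step are assumptions based on my setup, while the `openflexure-stitch` invocation matches the command used in the code later in this thread):

```shell
# Hedged sketch of the install/run steps; details may differ on your system.
python -m venv .venv
.\.venv\Scripts\activate           # on Linux/macOS: source .venv/bin/activate
pip install openflexure-stitching  # assumption: PyPI package name
# libvips must be downloaded separately and its bin/ folder put on PATH
openflexure-stitch path\to\scanimages
```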
After figuring out how to install the stitching program, to make the process require even less human input, I made a scanning program that automatically stitches the scan and moves the result to another folder. The scan is taken between two corners of a rectangular area. There are comments in the code explaining the inputs and everything. This process was quite fun. I had some trouble with metadata before I learned that OpenFlexure Stitching requires the scanning metadata produced by the official scanning function (whose documentation is quite confusing). After I got that working, I still received errors; I thought it was still the metadata, but apparently it was just the amount of overlap I gave it. We also had some trouble running it without os.system, by calling the function directly, but we couldn’t figure it out (Recomendations for load_tile_and_stitch() (#17) · Issues · OpenFlexure / openflexure-stitching · GitLab). Overall, a very fun process; I liked it a lot more than installing OpenFlexure Stitching. Here is the code!
import math
import os
import shutil

import openflexure_microscope_client

# (The openflexure_stitching imports were removed: calling the stitcher
# directly from Python didn't work for us; see the issue linked below.)

m = openflexure_microscope_client.find_first_microscope()

# WARNING: the program will delete saved images on the microscope! Transfer those first.
#
# Requirements:
# - openflexure-stitching (properly installed and working in the terminal)
# - an OpenFlexure microscope running the OpenFlexure Connect software
# - according to the pyclient documentation, the network needs to be simple;
#   an ethernet cable connecting both devices works fine
# - os, shutil, math, openflexure_microscope_client
#
# Inputs to scanandstitch:
#   c1:  space-separated coordinates of the first corner (or a list)
#   c2:  space-separated coordinates of the second corner (or a list)
#   ov:  around how much overlap (in stage coordinates) you want between image locations
#   foc: focus amount/range in z coordinates, 0 to disable autofocus
#   dir: output directory (with the file name at the end)
#
# The program scans between the two specified corners. The overlap is not
# exactly what you set it to: it gets as close as possible (only in the
# upward direction) while still filling the area. The program will not work
# unless you give it a high enough overlap value.
def scanandstitch(c1, c2, ov, foc, dir):
    m.autofocus()
    # refresh the download directory
    if os.path.isdir('Downloads/scanimages'):
        shutil.rmtree('Downloads/scanimages')
    os.mkdir('Downloads/scanimages')
    # calculate step count, step length, and set up variables
    p = m.position
    if isinstance(c1, str):
        x1, y1 = map(int, c1.split())
        x2, y2 = map(int, c2.split())
    else:
        x1, y1 = c1
        x2, y2 = c2
    xd = abs(x1 - x2)
    yd = abs(y1 - y2)
    essx = 3600 - ov  # estimated step size: frame size minus desired overlap
    essy = 2700 - ov
    ssx = xd / math.ceil(xd / essx)  # actual step sizes that exactly fill the area
    ssy = yd / math.ceil(yd / essy)
    xsc = math.ceil(xd / essx)  # step counts
    ysc = math.ceil(yd / essy)
    # move to the starting position (lower corner)
    p['x'] = min(x1, x2)
    p['y'] = min(y1, y2)
    m.move(p)
    # scan and download the images
    m.scan(params={'grid': [xsc, ysc, 1], 'stride_size': [ssx, ssy, 0],
                   'autofocus_dz': foc, 'filename': 'SCAN', 'bayer': True},
           wait_on_task=True)
    for i in m.list_capture_ids():
        m.download_from_id(i, "Downloads/scanimages")
        m.delete_image(i)
    # stitch, then move the final file
    abspath = os.path.abspath("Downloads/scanimages")
    # very tricky to run from Python (we gave up):
    # https://gitlab.com/openflexure/openflexure-stitching/-/issues/17
    # https://gitlab.com/beniroquai/openflexure-stitching/-/blob/main/src/openflexure_stitching/pipeline.py?ref_type=heads#L31
    os.system('cd openflexure-stitching && python -m venv .venv && .\\.venv\\Scripts\\activate && openflexure-stitch ' + abspath)
    os.replace("Downloads/scanimages/scanimages_stitched.jpg", str(dir))

scanandstitch("2000 2000", "-2000 -2000", 1500, 200, "c:/Users/kenzo/Downloads/stitchedimages/stitch1.jpg")
Video
Some additional things we may consider implementing:
- Automatic upload of images to an online storage platform like Amazon S3 or Imgur, then uploading sample and acquisition metadata to MongoDB along with the image URI
- CAPTCHA- and email-verification-based temporary HiveMQ username/password combinations
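As a rough sketch of the temporary-credential idea (all names here are hypothetical; the real system would only issue credentials after CAPTCHA and email verification pass, and registering the pair with the HiveMQ broker is a separate step not shown):

```python
import secrets
import time

def make_temp_credentials(ttl_seconds=3600):
    """Generate a short-lived MQTT username/password pair with an expiry time."""
    username = "temp-" + secrets.token_hex(4)
    password = secrets.token_urlsafe(16)
    expires_at = time.time() + ttl_seconds
    return {"username": username, "password": password, "expires_at": expires_at}

def is_expired(creds, now=None):
    """Check whether a credential record is past its expiry time."""
    return (now if now is not None else time.time()) >= creds["expires_at"]
```

A background job (or the broker's own auth hook) would periodically revoke records for which `is_expired` returns True.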
I finally got the microscope connected to eduroam (WPA2 Enterprise). I had many problems with this step, and I honestly don’t really know what fixed it. In the end, some of my final problems were not knowing where the network manager was in the bar at the top of the screen, and not realizing I needed to change the network type to only require a username and password. I also got the heightmap extension working and used it to set the new (0, 0) point at the center of the range of motion. The heightmap extension was very easy to get working, probably the easiest part of this project so far.
height map: Joe Knapper / height_map · GitLab
I have built another one of these microscope kits. It took me about 3-4 hours and was just as straightforward as before; generally an enjoyable experience. I have also set up all four microscopes on the internet and am currently working on the MQTT system.
I have finished the MQTT communication system, as well as livestreaming of the camera output to YouTube. You can now control the microscope from anywhere in the world. I am currently working on implementing stitching on the Raspberry Pi so that it can be used over MQTT. I finished the code for this, but I am having a lot of trouble getting OpenCV to work on the Raspberry Pi with Python 3.10. I am also working on a system for temporary credential generation.
Here is how the MQTT system works:
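A minimal sketch of what such a control channel could look like (the topic names and payload fields below are hypothetical, not the actual ones used; a real client would publish these with a library like paho-mqtt):

```python
import json

# Hypothetical topic layout: one command topic per microscope, keyed by ID.
def command_topic(microscope_id):
    return f"microscopes/{microscope_id}/command"

def build_move_command(x, y, z=0):
    """Serialize a stage-move request as a JSON payload."""
    return json.dumps({"action": "move", "x": x, "y": y, "z": z})

# With a broker connection (e.g. paho-mqtt), the publish would be roughly:
#   client.publish(command_topic("scope1"), build_move_command(2000, -1500))
# and a listener on the Pi would decode the JSON and call the local
# openflexure_microscope_client to perform the move.
```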