Open-source tool for real-time and automated analysis of droplet-based microfluidics

Droplet-based microfluidic technology is a powerful tool for generating large numbers of monodisperse nanoliter-sized droplets for ultra-high-throughput screening of molecules or single cells. Yet further progress in methods for the real-time detection and measurement of passing droplets is needed to achieve fully automated systems and, ultimately, scalability. Existing droplet monitoring technologies are either difficult for non-experts to implement or require complex experimental setups. Moreover, commercially available monitoring equipment is expensive and therefore limited to a few laboratories worldwide. In this work, we validated for the first time the easy-to-use, open-source Bonsai visual programming language for accurately measuring, in real time, droplets generated in a microfluidic device. With this method, droplets are detected and characterized from bright-field images at high processing speed. We used off-the-shelf components to assemble an optical system that allows sensitive, image-based, label-free, and cost-effective monitoring. As a test of its use, we present the results of our method, in terms of droplet radius, circulation speed and production frequency, and compare its performance with that of the widely used ImageJ software. Moreover, we show that similar results are obtained regardless of the user's degree of expertise. Finally, our goal is to provide a robust, simple-to-integrate, and user-friendly tool for monitoring droplets, capable of helping researchers get started in the laboratory immediately, even without programming experience, enabling analysis and reporting of droplet data in real-time and closed-loop experiments.

Note: The 'Image Acquisition' node needs to be defined before starting the workflow. 'Feature Extraction' and 'Droplet Analysis' are nodes with externalized properties, allowing for the dynamic control of several parameters while the workflow is running. Moreover, by double-clicking the 'Droplet Analysis' node, the visualization of the data at different stages of processing can be launched at any time, in parallel with the execution of the dataflow.

1) 'Image acquisition'
In more detail, the node 'Image acquisition' enables: i) the visualization and acquisition of real-time video data or ii) the visualization of offline video (i.e., pre-recorded video). Therefore, this workflow can be used online to extract measures and make real-time decisions on the experimental conditions, or to analyse video data offline. If users double-click on the 'Image acquisition' node, a new workflow panel will open, and it is possible to enable or disable the node 'FromFile' or 'FromCamera' by right-clicking on the respective node. Please note that users must make sure that the workflow is stopped before selecting the intended option. If the node 'FromFile' is enabled (Supplementary Figure 2A), it will display a pre-recorded video. In 'Properties', clicking on 'FileNameToOpen' allows users to indicate the video directory (the location where the video is saved on the computer). The video file should be in AVI format (.avi file extension). If the node 'FromCamera' is enabled (Supplementary Figure 2B), it will display the video being recorded. To save the video, users must choose a folder in 'Properties', set the 'FrameRateToSave', and select 'True' for the option 'SaveVideo'. The video should be saved in AVI format, such as 'MyVideo.avi'. Afterwards, the user can initiate the workflow by clicking 'Start', and the output of the 'Image Acquisition' node will appear in the pipeline as 'Frames'.
2) 'Feature extraction'
The node 'Feature extraction' enables the segmentation, detection and extraction of direct measurements related to the droplets. As shown in Supplementary Figure 3A, if the user clicks on the node 'Feature extraction', the 'Properties' panel will show three parameters that must be set: i) 'FrameRateToAnalysis', ii) 'RegionOfInterest', and iii) 'Threshold'. First, the correct frame rate of the analyzed video must be selected (i.e., the camera/video frame rate). The users also need to define a region of interest (ROI) around the microfluidic channel. To understand how this node works, double-click on it: two important group nodes will appear, the node 'Regionofinterest', which is used to define the ROI, and the 'DropletProcessing' node, which is responsible for the segmentation, detection and direct measurements of droplets. The 'Regionofinterest' node crops a rectangular sub-region of each image and converts it to grayscale, so that a threshold can be applied to create a binary image depending on the pixel intensity (please see Video Tutorial: Read, crop and save a video file). Users should select their intended ROI (ideally, the region of interest should include droplets and, if possible, exclude the channel walls) by clicking on the first corner of the crop rectangle and dragging down to the opposite corner. Additionally, inside 'DropletProcessing' (Supplementary Figure 3B), the node 'Segmentation' is used to isolate the droplets by using the 'Threshold' value for the binarization of the image: all pixels above the threshold become black, and all pixels below the threshold become white (Supplementary Figure 3C). Once more, to visualize what is inside each node, users should make sure that the workflow is stopped and then double-click on the node. By adjusting the 'Threshold' value in the 'Properties' panel, the border of the droplets may be tuned for the following detection step.
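The crop/grayscale/threshold chain described above can be expressed in a few lines. Here NumPy stands in for the OpenCV nodes Bonsai uses; the function name and the (x, y, width, height) ROI convention are assumptions made for illustration.

```python
import numpy as np

def segment_droplets(frame, roi, threshold):
    """Crop a rectangular ROI from an RGB frame, convert it to grayscale, and
    apply an inverted binary threshold, mirroring the 'Regionofinterest' and
    'Segmentation' nodes: pixels below the threshold become white (droplets),
    pixels above it become black (background)."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    # Standard luma weights for RGB -> grayscale conversion.
    gray = crop @ np.array([0.299, 0.587, 0.114])
    # Dark droplet interiors end up white (255), bright background black (0).
    return np.where(gray < threshold, 255, 0).astype(np.uint8)
```

Adjusting `threshold` interactively, as done via the 'Properties' panel, changes how tightly the white regions hug the droplet borders.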

Supplementary Figure 3. 'Feature Extraction' group node. (A) If the node 'FeatureExtraction' is selected, the 'Properties' panel will display all the configuration properties available for this node. Most properties can be configured simply by changing the text value in the corresponding row of the property grid. (B) 'DropletProcessing' node (within the 'Feature Extraction' node), where users can find three group nodes: 'Segmentation', 'DistanceBetweenDroplets' and 'AverageMotion', which unfold as illustrated in (C).
Once the users find appropriate threshold levels for droplet segmentation, droplet detection and droplet measurements can be performed. For the calculation of droplet radius, speed and frequency we use three measures: i) the droplets' major axis length, ii) the distance between droplet centroids and iii) the average motion of droplet pixels per frame. These values are computed within the group nodes 'DistanceBetweenDroplets' and 'AverageMotion' (Supplementary Figure 3B, C). For the droplets' major axis length and the distance between droplet centroids, firstly the 'FindContours' node traces the contours of all the objects in a black-and-white image. An object is defined as a region of connected white pixels. Then, the operator 'BinaryRegionAnalysis' computes image moments to describe droplets after segmentation. Simple properties of the image found via image moments include the major and minor axis length, area, center of mass, and orientation of all the detected contours per frame [1]. For instance, the major axis (1) and centroid (2) are calculated using raw image moments (M00, M10, M01) and central moments (Mu20, Mu02, Mu11) (please check the code here).

MajorAxisLength = 2.75 * sqrt( (Mu20/M00) + (Mu02/M00) + sqrt( (2*(Mu11/M00))^2 + ((Mu20/M00) - (Mu02/M00))^2 ) ) (1)

Centroid = (M10/M00, M01/M00) (2)

Moreover, the distance in pixels between the centroids of consecutive droplets is then computed using the 'PythonTransform' node and saved as 'DistanceBetweenDroplets'. Basically, the code encompasses the calculation of the distance (3) between two points in a 2-dimensional space.

Distance = sqrt( (x2 - x1)^2 + (y2 - y1)^2 ) (3)

We observed that droplets entering and exiting the frame (Droplet [0] and Droplet [n]) present an 'unstable' centroid, so we decided to use Droplet [1] and Droplet [2] in each frame to calculate the distance.
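Using the moment definitions above, equations (1)-(3) can be reproduced from a binary droplet mask with plain NumPy. This is an illustrative re-implementation under those definitions, not the 'BinaryRegionAnalysis' code itself, and the function names are hypothetical.

```python
import numpy as np

def droplet_measures(mask):
    """Centroid (eq. 2) and major-axis length (eq. 1) of one binary droplet
    mask, from raw (M00, M10, M01) and normalized central (Mu20, Mu02, Mu11)
    image moments."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size                                  # area in pixels
    cx, cy = xs.sum() / m00, ys.sum() / m00        # centroid = (M10/M00, M01/M00)
    mu20 = ((xs - cx) ** 2).sum() / m00            # normalized central moments
    mu02 = ((ys - cy) ** 2).sum() / m00
    mu11 = ((xs - cx) * (ys - cy)).sum() / m00
    common = np.sqrt((2 * mu11) ** 2 + (mu20 - mu02) ** 2)
    major_axis = 2.75 * np.sqrt(mu20 + mu02 + common)  # eq. (1), document's constant
    return (cx, cy), major_axis

def centroid_distance(c1, c2):
    """Euclidean distance (eq. 3) between two droplet centroids, as computed
    in the 'PythonTransform' node for 'DistanceBetweenDroplets'."""
    return np.hypot(c2[0] - c1[0], c2[1] - c1[1])
```

In practice this would be applied to Droplet [1] and Droplet [2] of each frame, since edge droplets have unstable centroids.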
Below is an example of the 'BinaryRegionAnalysis' output from video 11, where the red dot represents the centroid and the red axis the major axis, together with the code used to calculate the distance between the droplets' centroids in positions 1 and 2. Moreover, for the average motion of droplet pixels, the 'Opticalflow' node (here) computes a dense optical flow through Gunnar Farnebäck's algorithm [2], which detects the pixel intensity changes between two consecutive frames. It identifies the motion of objects between every two consecutive frames, providing the flow vectors of all pixels from 'RoiImage' (represented as 'Source1'). As shown in Supplementary Figure 3C, this node is connected to the 'Magnitude' node, which computes the magnitude (distance covered during pixel motion) of all flow vectors. The vector magnitudes are then zipped together with the 'Segmentation' node (represented as 'Source2'), which provides the calculation of the motion of all droplet pixels per frame as 'AverageMotion'. To learn more about the 'Zip' node, look here.

3) 'Droplet Analysis'
Finally, the node 'Droplet Analysis' computes the radius (µm), production rate (droplets/s) and speed (µm/s) of passing droplets, and it is responsible for the visualization and saving of the data. If users click on the node, they will see in the 'Properties' panel the possibility of saving the results to a .csv file (Supplementary Figure 4A). To save the file, users need to click on 'FileName' to choose a folder to save the data and select 'True' for the option 'SaveData'. The results (radius, production rate and speed) should be saved in CSV format, such as 'results.csv'. Moreover, users need to define the 'Microns'/'Pixels' aspect ratio by introducing the channel width in µm (200 µm in this case) and its correspondence in pixels. Additionally, it is possible to reduce the susceptibility of the signal to fast changes, particularly in the occurrence of fast vibrations, by setting 'FramesToAverage'. The larger the number of frames used to calculate the moving average, the more fluctuation/instability smoothing occurs, as more droplets are included in each calculated average. Moreover, by double-clicking on 'Droplet Analysis', users will find the main nodes employed to calculate radius (µm), production rate (droplets/s) and speed (µm/s) (Supplementary Figure 4B). Droplet radius (4) is calculated within the node 'DropletRadius(micron)', where the major axis length of all the droplets per frame (node 'Major Axis Length') is rescaled (node 'Rescale') to µm and then divided by 2 (node 'Divide') (Supplementary Figure 4C).
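The radius calculation and the 'FramesToAverage' smoothing reduce to a few lines. In this sketch, the channel width in pixels is a hypothetical calibration value: users must measure it from their own images, just as they enter the 'Microns'/'Pixels' correspondence in the 'Properties' panel.

```python
import numpy as np

def droplet_radius_um(major_axis_px, channel_width_um=200.0,
                      channel_width_px=400.0):
    """Rescale a major-axis length from pixels to microns using the channel
    width calibration ('Rescale'), then halve it ('Divide'), as in the
    'DropletRadius(micron)' node. channel_width_px is an assumed value."""
    microns_per_pixel = channel_width_um / channel_width_px
    return (major_axis_px * microns_per_pixel) / 2.0

def moving_average(values, frames_to_average=5):
    """Smooth per-frame measurements over a sliding window, as controlled by
    the 'FramesToAverage' property; larger windows smooth fluctuations more."""
    kernel = np.ones(frames_to_average) / frames_to_average
    return np.convolve(values, kernel, mode="valid")
```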

Supplementary Figure 4. 'Droplet Analysis' group node.
Concerning droplet speed (5), by knowing the values of 'Average Motion' (the average motion of all droplet pixels per frame, as 'Source1') and 'FrameRate', it is possible to calculate the droplet speed, as shown in Supplementary Figure 4D, with the node 'Droplet Speed (micron/s)'. The node 'Rescale' allows the conversion from pixel/frame to µm/frame, and the node 'Multiply' enables the conversion from µm/frame to µm/s. Lastly, in Supplementary Figure 4E, the droplet production rate (6) per frame is determined in the node 'Frequency (droplets/s)' as the droplet motion per frame divided by the distance between the centroids of consecutive droplets. The 'DistanceBetweenDroplets' and 'AverageMotion' are zipped together, providing the values for calculating the frequency of droplets per frame. Finally, the 'Frequency (droplets/s)' node in Supplementary Figure 4D computes the production rate (droplets/s) according to the 'FrameRate' value using the node 'ExpressionTransform'.
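Equations (5) and (6) amount to unit conversions: µm/s = (pixels/frame) × (µm/pixel) × (frames/s), and droplets/s = (pixels moved per frame ÷ pixels between droplets) × (frames/s). The sketch below assumes the per-frame quantities defined earlier ('AverageMotion' in pixels/frame, 'DistanceBetweenDroplets' in pixels); the function names are hypothetical.

```python
def droplet_speed_um_s(average_motion_px_per_frame, microns_per_pixel,
                       frame_rate):
    """pixels/frame -> um/frame ('Rescale'), then um/frame -> um/s
    ('Multiply' by the frame rate), as in 'Droplet Speed (micron/s)'."""
    return average_motion_px_per_frame * microns_per_pixel * frame_rate

def droplet_frequency(average_motion_px_per_frame,
                      distance_between_droplets_px, frame_rate):
    """Droplets per second: the fraction of an inter-droplet spacing covered
    each frame, scaled by the frame rate ('ExpressionTransform')."""
    droplets_per_frame = (average_motion_px_per_frame
                          / distance_between_droplets_px)
    return droplets_per_frame * frame_rate
```

For example, droplets spaced 9 px apart and advancing 3 px per frame at 600 fps pass at (3/9) × 600 ≈ 200 droplets/s, matching the high-generation-rate experiments reported below.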

Finally, the node 'Droplet Analysis' allows the visualization of the results (Supplementary Figure 4F).

(A) Experiment where droplets are close to each other (oil and water flows of 5 and 4 µl/min, respectively). Droplets with (B) a small radius of approximately 20 µm and (C) a high generation rate of ≈200 droplets/s (oil and water flows of 20 and 4 µl/min, respectively). The blue bars correspond to a video where the camera frame rate was set to ≈200 fps, and the orange bars correspond to a video where the camera frame rate was set to ≈600 fps.