High Speed Rotation Estimation with Dynamic Vision Sensors
Abstract.
Rotational speed is one of the important metrics to be measured for calibrating electric motors in manufacturing, monitoring engines during car repair, detecting faults in electrical appliances, etc. However, existing measurement techniques either require prohibitive hardware (e.g., high-speed cameras) or are inconvenient to use in real-world scenarios. In this paper, we propose EV-Tach, an event-based tachometer via efficient dynamic vision sensing on mobile devices. EV-Tach is designed as a high-fidelity and convenient tachometer by introducing the dynamic vision sensor as a new sensing modality to capture high-speed rotation precisely under various real-world scenarios. By designing a series of signal processing algorithms bespoke for dynamic vision sensing on mobile devices, EV-Tach is able to extract the rotational speed accurately from the event stream produced by dynamic vision sensing on rotary targets. According to our extensive evaluations, the Relative Mean Absolute Error (RMAE) of EV-Tach is as low as , which is comparable to the state-of-the-art laser tachometer under the fixed measurement mode. Moreover, EV-Tach is robust to the subtle movement of the user's hand and can therefore be used as a handheld device, where the laser tachometer fails to produce reasonable results.
1. Introduction
Machines and devices with rotary components are pervasive in our daily life and play significant roles in various industrial fields, such as energy, aviation, automobiles and home appliances. In manufacturing, rotational speed is one of the key indicators of the current working state of a machine; therefore, there is a huge demand for measuring rotational speed with an accurate and convenient tool. In appliance repair, repairmen normally use a tachometer (an instrument that measures rotational speed) to measure the rotational speed of the electric motors of an appliance, such as the condensing unit of an air-conditioner or a washing machine, to infer possible faults from irregular rotational speed. In automotive maintenance, checking the rotational speed of the wheels has become a standard item in annual vehicle inspection manuals (Haynes, 2022). Real-time measurement of rotational speed is also useful for predicting the actions of flying drones, because they change their flying direction and speed by adjusting the rotational speed of one or multiple propellers; due to inertia, the actions always follow the change of rotational speed. Finally, rotational speed calibration of devices and equipment such as drones, water meters and car engines is another typical scenario, in which the rotational speed needs to be measured precisely to ensure the devices function as expected.
A number of measurement approaches have been proposed to obtain the rotational speed of different targets under different circumstances. They differ in the requirement of physical contact (contact or non-contact) and in sensing modality (electromagnetic, laser or vision). Mechanical tachometers (directindustry, 2022) are a type of traditional device that measures the rotational speed of large machines via a physical connection to the shaft of the target. Electrostatic (Li et al., 2019), hall-effect and optical encoder tachometers are non-contact, but they must be placed in proximity to measure the rotation of extra hardware mounted on the shafts of the targets. All the above approaches are invasive, as the physical contact or extra hardware may significantly influence the natural rotation. Laser tachometers (UNI-T, 2022; Zhai et al., 2019) are a step towards more accurate and convenient measurement of rotational speed. The laser tachometer enables highly accurate (the error rate is below ) and minimally invasive measurement and can be used at a reasonable working distance. Therefore, the laser tachometer has become the mainstream instrument for rotational speed measurement. However, the requirement of reflective labels on the targets still limits the application of laser tachometers under some circumstances, as attaching labels may be inconvenient or even impossible for some devices. Most importantly, although laser tachometers are built as portable devices, it is difficult for users to point a handheld laser tachometer at the extremely small label on the rotating target, and the accuracy in handheld mode degrades significantly according to our evaluation in Section 4.2.
Vision-based approaches (Wang et al., 2017; Zhong et al., 2018; Zhu and Yu, 2011; Natili et al., 2020; Wang et al., 2018; Zhao et al., 2018; Kim et al., 2016) require no extra hardware on the rotating targets and can further extend the working distance with zoom lenses. They also show strong environmental adaptability and robustness. However, for vision-based approaches with CCD/CMOS sensors (Wang et al., 2017; Zhu and Yu, 2011; Natili et al., 2020; Wang et al., 2018; Zhao et al., 2018), the range of rotational speed that can be measured is limited by the frame rate, which is normally between 30-50 fps (frames per second), and the accuracy is not good enough (the error rate is over ) for high-precision measurement. As shown on the left of Figure 1, the high-speed rotation of a drone's propellers places significant motion blur in video recorded by normal RGB cameras and causes measurement failure. High-speed cameras with frame rates of a few hundred or even thousands of fps can cover a larger range of rotational speed. However, high-speed cameras are highly resource-consuming, which is prohibitive for processing on mobile devices with embedded CPUs.

To solve the above issues, we introduce Dynamic Vision Sensing (DVS), a new sensing modality based on an emerging vision platform called the event camera (Gallego et al., 2022), to capture high-speed rotation without motion blur. Figure 1 presents the rotating propellers of a drone landing on the floor. The left figure is an RGB frame from a video recorded at a frame rate of 60 fps by an off-the-shelf smartphone camera, and the right figure shows the accumulated outputs of two 2 ms event streams from DVS, in red and green respectively. From the figures we can observe that the rotating propellers are severely blurred in the RGB frame, while the shape of the propellers is well preserved in the DVS outputs and the rotation between the two slices can be easily identified. In this paper, a series of bespoke algorithms are proposed to process the DVS outputs and calculate the rotational speed accordingly. Our contributions can be summarized as follows:
•
We first introduce DVS as a new type of vision modality for mobile sensing and demonstrate its advantage in capturing high-speed rotation.
•
An event-based tachometer, EV-Tach, is proposed via efficient, high-fidelity rotary motion sensing with DVS to measure high-speed rotating targets. A number of algorithms are designed to process event streams for rotating-object extraction and rotational speed estimation.
•
We conduct extensive evaluations on the accuracy of EV-Tach and its robustness to different rotational speeds, working distances and subtle movement. According to the results, EV-Tach achieves accuracy comparable to a laser tachometer in the fixed setting. Its robustness to subtle movement shows it can be used in handheld mode and can measure unstable rotating targets where the laser tachometer fails to work.
The rest of the paper is organized as follows. Section 2 provides related work on rotary motion measurement and DVS. We then overview the system design in Section 3, including the algorithms for event stream processing and rotational speed estimation. Extensive evaluations are conducted and the results are presented in Section 4. Finally, we discuss the advantages and limitations of EV-Tach and conclude the paper in Section 6.
2. Related Work
In this section, we review the work related to rotary motion sensing and dynamic vision sensors.
Traditional Rotary Motion Sensing. Mechanical tachometers (directindustry, 2022) are physically connected to, and rotate with, the shaft of the target to measure the rotational speed. However, the physical contact constrains the working distance and causes inaccurate measurement due to the mass and friction of the mechanical tachometer. Electrostatic (Li et al., 2019) and hall-effect sensors detect the change of the electromagnetic field caused by a shaft-bearing fixed on the target, and the frequency of the change is estimated as the rotational speed. Optical encoder tachometers rely on a photoelectric sensor to detect light passing through the disc of an encoder placed between an LED light source and the photoelectric sensor. The encoder is a disc mounted on the shaft of the rotating target with opaque and transparent segments, so that the rotational speed can be estimated from the pattern of the light. Electrostatic and optical encoder tachometers can be regarded as non-contact, but they must be placed in proximity to measure the rotation of extra hardware attached on the shaft of the target. Laser tachometers (UNI-T, 2022; Zhai et al., 2019) measure the rotational speed by detecting small and lightweight reflective labels attached on the surface of the target. However, the use of reflective labels may cause inconvenience during measurement, especially in handheld scenarios.
Vision-based Rotational Speed Estimation. A number of non-contact approaches have been proposed for rotational speed estimation. Wang et al. (Wang et al., 2017) calculated the structural similarity and two-dimensional correlation between consecutive frames, and the similarity-related parameters were used to reconstruct a continuous and periodic time-series signal. The fast Fourier transform was applied to calculate the period of the signal, which was used to infer the average speed of rotation. Another approach (Wang et al., 2018) also utilized the periodic change of similarity between frames; the difference was that the Chirp-Z transform and parabolic-interpolation-based auto-correlation were applied to estimate the period in another domain. To improve the accuracy and range of measurement, Natili et al. (Natili et al., 2020) obtained the sequence of correlation coefficients between a reference frame and each of the following frames. The rotational speed could then be calculated through the short-time Fourier transform (STFT), which enabled more accurate measurement of the rotational speed of non-stationary and disturbed systems. Instead of directly calculating the complete period of the rotation, some works obtain the rotational speed by calculating the instantaneous angular speed (IAS). Zhu et al. (Zhu and Yu, 2011) extracted two adjacent frames from a video of the rotating object, applied the Hough transform to detect straight lines, and calculated the angular changes of these lines. Since the interval between the two frames was known, the angular velocity of the object could be easily obtained. However, the methods above are limited by the frame rate of conventional RGB cameras and can only accommodate rotational speeds of less than  rpm, and the accuracy is far from that of our approach: the error rate is over , which is about  times worse than our proposed EV-Tach.
To obtain a larger measurement range of rotational speed, some researchers used high-speed cameras (Kim et al., 2016; Zhong et al., 2018) to measure the instantaneous angular speed of a rotating object. However, the cost of high-speed cameras is prohibitive for embedded platforms, and both of these methods require special markers attached on the rotating targets.
Dynamic Vision Sensors. In this paper, we define dynamic vision sensing as a type of vision sensing modality based on the event camera (Gallego et al., 2022). The event camera is bio-inspired and its pixels work independently to detect changes of intensity. Unlike frame-based RGB cameras, the output of an event camera consists of non-structural and discrete event points in the spatio-temporal domain and is termed an event stream. Processing event streams is a new topic of study. To facilitate existing methods, event streams have been converted to other familiar formats, including images, graphs and 3D pointclouds. For example, image-like representations of event streams were introduced by accumulating the event points of each pixel over time, and corresponding methods were proposed for gesture recognition (Amir et al., 2017), gait recognition (Wang et al., 2019a) and estimating the optical flow of event streams (Zhu et al., 2018). However, image-like representations ignore the temporal information of the event stream. Graph-based representations were proposed to preserve the spatial-temporal information of event streams: 2D graphs (Bi et al., 2019) or 3D graphs (Wang et al., 2021) were built by selecting and connecting event points via nearest neighbor search, and graph-based convolutions were then applied to extract higher-level information. Spatial-temporal event streams can also be processed as 3D pointclouds, to which PointNet (Qi et al., 2017a) and PointNet++ (Qi et al., 2017b) were applied, e.g., for gesture recognition (Wang et al., 2019b).
Most relevant work. In (Benosman et al., 2013), the authors proposed a method to calculate the optical flow of a moving object in an event stream and showed its application to estimating the rotational speed of a plate with a single straight line simulating a blade. Though it also utilized DVS as the sensing modality for rotational speed estimation, the algorithm design was not sophisticated enough to obtain accurate measurements of high-speed rotation: the method could only provide reasonable measurements for rotational speeds of less than  rpm. Gallego et al. (Gallego and Scaramuzza, 2017) proposed an approach to estimate the rotation of an event camera by processing its event streams. They applied a contrast-maximizing edge alignment algorithm to estimate the angular velocity, which is relevant to our work. However, it measures the rotation of the event camera itself, which is fundamentally different from the goal of our work; meanwhile, it produces significantly lower measurement accuracy (the error rate is around ) and sensing range (less than  rpm) than our approach.
3. System Design

In this paper, we propose an event-based rotational speed measurement system, EV-Tach. The overall system design of EV-Tach is presented in Fig. 2. It starts with recording a short period of event stream. A series of event stream processing algorithms is then proposed to extract multiple rotating objects from the event stream and remove outlier events to improve data quality. Finally, we propose an ICP-based registration method to estimate the transformation between two slices of event streams and calculate the rotational speed of the target.
3.1. Event Stream Processing
In this section, we describe the event stream processing algorithms in detail. The algorithms aim to extract a high-quality, low-dimensional event stream containing a single rotating target for the rotational speed estimation in the next section.
3.1.1. Dynamic Vision Sensors
To make the paper self-contained, we briefly review the background of event cameras before describing the event stream processing algorithms in detail. Dynamic vision sensors, or event cameras, are bio-inspired visual sensors developed to mimic the imaging principles of the biological retina. In recent years, event cameras have been applied to a variety of computer vision tasks, such as super resolution, image deblurring and gesture recognition. Unlike traditional RGB cameras, event cameras do not produce synchronous video frames at a fixed rate but asynchronous event streams. Specifically, the pixels of an event camera work independently to detect changes of the intensity of the scene as,

(1)  ΔL(x, y, t) = L(x, y, t) − L(x, y, t − Δt)

where L(x, y, t) is the intensity value of pixel (x, y) at time t. When the change of intensity |ΔL| at a pixel exceeds the threshold C, an event is released immediately. An event stream is a collection of events over time and is represented as a stream of quadruplets e = (x, y, t, p). When the event corresponds to a positive change, the polarity p is +1; otherwise it is −1. Compared with traditional RGB cameras, event cameras possess a number of unique characteristics. As an event is launched as soon as a change is detected, without global synchronization, event streams have high temporal resolution and low response latency (on the order of microseconds). Event cameras save sensing energy and bandwidth as they produce events only when changes are detected. Their high dynamic range (140 dB vs. 60 dB of traditional RGB cameras) enables them to work well under challenging lighting conditions. These characteristics give event cameras great potential for high-speed motion capture and for running on resource-constrained devices.
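To make the event generation model above concrete, the following is a minimal Python sketch of the per-pixel thresholding. It is only an illustration of the principle, not the sensor's actual asynchronous circuitry: the frame-based simulation, the log-intensity reference and the default threshold value are our assumptions.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Emit an event wherever the (log-)intensity change at a pixel exceeds
    `threshold` between consecutive frames; the per-pixel reference is
    reset only where an event fired, mimicking the DVS pixel behaviour.
    Returns a list of (x, y, t, polarity) quadruplets."""
    events = []
    ref = np.log(frames[0].astype(float) + 1e-6)  # per-pixel reference intensity
    for frame, t in zip(frames[1:], timestamps[1:]):
        cur = np.log(frame.astype(float) + 1e-6)
        diff = cur - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, polarity))
            ref[y, x] = cur[y, x]  # reset reference only where an event fired
    return events
```

Running this on two synthetic frames where a single pixel brightens produces exactly one positive event at that pixel, which matches the thresholding rule of Eq. (1).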
3.1.2. Rotating Objects Extraction
One of the important merits of EV-Tach over electromagnetic and laser tachometers is its capability of sensing multiple rotating targets simultaneously. For example, in drone manufacturing, the four independent electric motors can be calibrated without changing the measurement setup. To estimate the rotational speed of multiple targets, we propose a K-means-based rotating objects extraction algorithm to isolate the events belonging to different rotating targets for further processing.
Heatmap-based Stream-centroids Initialization: K-means is a widely used clustering algorithm in Euclidean space. It iteratively assigns points to the nearest clusters and updates the centroids accordingly until it converges. For rotary motion sensing, the resultant centroids are the locations of the rotation axes of all rotating targets. The computational complexity of K-means is low, so it can run in-situ on resource-constrained platforms. However, it suffers from instability and sensitivity to the initial locations of the centroids (Zhang et al., 2008): a poor choice of initial centroids may lead the algorithm to fall into a local optimum and result in incorrect clusters.


Considering the characteristics of event streams produced by rotating targets, we propose a lightweight stream-centroids initialization method based on the heatmap of accumulated events to enable reliable rotating objects extraction. After caching a fixed length of event stream, e.g., 150 ms, the number of events on each pixel is accumulated into a heatmap. The size of the grid in our setting is  pixels. Figure 3(a) presents the locations of the initial centroids on the heatmap of an event stream collected from a four-motor drone. From the heatmap, we can observe that more events are generated near the center of the rotating target. To localize the initial centroids, we find the grid with the highest value (denoted as ) in the heatmap as the initial centroid of the first cluster. The remaining centroids are then chosen by finding the farthest distributed grids whose values are larger than  ( in our evaluations and experiments), i.e., the second initial centroid is chosen as the farthest grid with over  events, the third one is chosen as the grid with the largest average distance to the first two centroids, and so on. Figure 3(a) shows an example of the four initial centroids chosen by our method. The principle behind this strategy is to maximize the inter-cluster distance to avoid local optima of the K-means algorithm. Figure 3(b) demonstrates the clustering results of the event stream generated by a four-propeller drone, where the four rotating objects are separated correctly.
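The initialization strategy above can be sketched as follows. This is a simplified illustration: the function name, the one-pixel grid granularity and the candidate-value ratio of 0.5 are our assumptions, not the paper's exact settings.

```python
import numpy as np

def init_centroids(events, shape, k, ratio=0.5):
    """Pick k initial centroids from an event heatmap: the hottest cell
    first, then the candidate cells (above ratio * peak count) with the
    largest average distance to the centroids chosen so far."""
    heat = np.zeros(shape, dtype=int)
    for x, y, *_ in events:           # accumulate events into the heatmap
        heat[y, x] += 1
    peak = heat.max()
    ys, xs = np.nonzero(heat >= ratio * peak)   # candidate cells
    cand = np.stack([xs, ys], axis=1).astype(float)
    centroids = [cand[np.argmax(heat[ys, xs])]]  # hottest cell first
    while len(centroids) < k:
        # average distance from each candidate to the chosen centroids
        d = np.mean([np.linalg.norm(cand - c, axis=1) for c in centroids],
                    axis=0)
        centroids.append(cand[np.argmax(d)])
    return np.array(centroids)
```

With two dense event blobs in opposite corners of the sensor, the two initial centroids land on the two blobs, which is the inter-cluster-distance-maximizing behaviour the text describes.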
Spatial Clustering on Event Streams: After the centroid initialization, the event stream is segmented into multiple clusters corresponding to the individual rotating targets. In each iteration, K-means-based clustering associates each event with the cluster whose centroid is nearest in Euclidean space and then updates the centroid of each cluster by:

(2)  c = (1/N) ∑_{i=1}^{N} x_i

where c is the updated centroid location, N is the total number of events in the cluster and x_i is the location of event i. The event clustering and centroid updating procedures are executed alternately until the locations of the centroids remain (almost) the same, which indicates that a stable clustering result has been achieved.
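The assign-then-update loop can be sketched as below, assuming event locations and initial centroids are given as NumPy arrays; the convergence tolerance is a hypothetical default.

```python
import numpy as np

def kmeans_events(points, centroids, tol=1e-3, max_iter=100):
    """Iterate: assign each event location to the nearest centroid, then
    move each centroid to the mean location of its cluster (Eq. 2),
    until the centroids (almost) stop moving."""
    centroids = np.asarray(centroids, dtype=float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iter):
        # Euclidean distance of every event to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        new = np.array([points[labels == j].mean(axis=0)
                        for j in range(len(centroids))])
        moved = np.max(np.linalg.norm(new - centroids, axis=1))
        centroids = new
        if moved < tol:
            break
    return labels, centroids
```

Note the sketch assumes every cluster keeps at least one event; a production version would guard against empty clusters.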
Choice of K: As the number of rotating objects in an event stream can vary, the choice of K is important for K-means clustering. To determine K without prior knowledge, we apply the Davies-Bouldin Index (DBI) (Davies and Bouldin, 1979) to evaluate the quality of the clustering: the DBIs of the clustering results under the candidate values of K are calculated, and the value yielding the minimal DBI is desired, as it maximizes the inter-cluster distance and minimizes the intra-cluster distance. To calculate the DBI for a given K, we first estimate the dispersion of each cluster and the separation between any two clusters. The dispersion S_i of cluster C_i is

(3)  S_i = (1/|C_i|) ∑_{e ∈ C_i} d(e, c_i)

where d(e, c_i) is the Euclidean distance between an event e and the centroid c_i of the cluster. The separation M_{ij} between clusters C_i and C_j is:

(4)  M_{ij} = d(c_i, c_j)

With the dispersion and separation, we can obtain the similarity R_{ij} between the two clusters C_i and C_j,

(5)  R_{ij} = (S_i + S_j) / M_{ij}

The similarity D_i between C_i and the whole collection is then defined as the maximum similarity between C_i and any other cluster in the collection:

(6)  D_i = max_{j ≠ i} R_{ij}

The DBI of the clustering result with K clusters can then be expressed as,

(7)  DBI = (1/K) ∑_{i=1}^{K} D_i

Finally, the value of K yielding the smallest DBI is chosen, and the corresponding clustering extracts the multiple rotating targets from the event stream.
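Eqs. (3)-(7) can be sketched directly; this is the textbook Davies-Bouldin computation, with function and variable names of our choosing.

```python
import numpy as np

def davies_bouldin(points, labels, centroids):
    """Davies-Bouldin Index: mean over clusters of the worst-case
    (S_i + S_j) / M_ij ratio (Eqs. 3-7). Lower is better."""
    k = len(centroids)
    # Eq. (3): mean distance of each cluster's events to its centroid
    disp = np.array([np.mean(np.linalg.norm(points[labels == i] - centroids[i],
                                            axis=1))
                     for i in range(k)])
    worst = []
    for i in range(k):
        # Eqs. (4)-(5): pairwise similarity; Eq. (6): worst case per cluster
        ratios = [(disp[i] + disp[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))   # Eq. (7)
```

In practice one would run K-means for each candidate K and keep the K whose clustering gives the smallest index; for two tight, well-separated clusters the index is close to zero.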

Outliers Removal: Besides the rotating targets, the subtle movement of the user (in handheld measurement) and the vibration of the host device also cause noticeable events in DVS; these events are regarded as outliers. Figure 4 presents the accumulated event stream in the pixel domain recorded by a user holding an event camera in front of a rotating target (the ring in red). Due to the subtle movement of the user's hand, outliers from the hosting device and edges in the background (in blue) are also detected in DVS. The rotating target causes a significantly larger density of events than the subtle movement, and the outliers are normally far from the centroid, so most of the valid events concentrate around the center of rotation. To remove the outliers, we first estimate the median distance of the events to the centroid,

(8)  d_med = median_i d(e_i, c)

Then we set three times d_med as the threshold, and the events whose distance to the centroid exceeds the threshold are marked as outliers.
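The median-distance filter is a few lines; the function name is ours, and the factor of 3 follows the text.

```python
import numpy as np

def remove_outliers(points, centroid, factor=3.0):
    """Keep only events within `factor` times the median distance to the
    centroid (factor = 3 as in the text); the rest are outliers."""
    d = np.linalg.norm(points - centroid, axis=1)
    med = np.median(d)
    return points[d <= factor * med]
```

The median is robust: a handful of far-away background events barely shifts it, so they fall outside the threshold while the dense ring of valid events survives.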
Angle of Rotational Symmetry:
For each identified rotating object, to estimate its rotational speed, we first need to track the amount of rotational motion within an appropriate time frame. In many real-world applications, rotating objects have rotationally symmetric shapes, such as the propellers of drones, fans of condensing units and wheels of automobiles. To estimate the rotational motion of these objects, we identify certain features on the rotating objects, e.g., the blades of the propellers/fans or the spokes of the wheels, and track the motion of those features (i.e., the angle of rotation) as a proxy for the rotational motion of the target objects. Note that for objects without such intuitive features, such as a plain rotating disk, in practice we can easily annotate them, e.g., with stickers or patterns, to create trackable features. In addition, without loss of generality, in this work we assume the rotating objects are rigid bodies, i.e., there is no significant deformation during their motion.
In this context, to accurately track the rotational motion of these features, e.g. the blades of the propellers, an important parameter to determine is the angle of rotational symmetry, i.e. the smallest angle for which a feature can be rotated to coincide with itself or the other features. In our case, this is used to determine the appropriate length of event streams for the later ICP-based registration to avoid ambiguity.
For example, Figure 7 shows an event stream caused by a rotating object with three separate blades. K-means++ (Arthur and Vassilvitskii, 2007) with Davies-Bouldin Index (DBI) evaluation is applied to separate the different blades in the event stream using only a small number of events (e.g., ), because when the number of events becomes large, the events generated by different blades become spatially entangled due to the rotation. The angle of symmetry can then be determined from the number of repeated parts, e.g., blades: for an object with n identical blades, the angle of rotational symmetry is 2π/n.
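Once the blade count is known from the clustering step, the angle of symmetry follows directly; a one-line helper (our naming) under the stated assumption of identical, evenly spaced blades:

```python
import math

def symmetry_angle(num_blades):
    """Angle of rotational symmetry for an object with `num_blades`
    identical, evenly spaced blades: theta_s = 2*pi / n."""
    return 2.0 * math.pi / num_blades
```

For the three-blade propeller of Figure 7 this gives 2π/3, the bound later used when choosing the registration step length.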

3.2. Rotational Speed Estimation
After the event streams corresponding to the different rotating targets are extracted, an ICP-based registration approach is proposed to estimate the rotational speed from the transformation of the event stream over time. To accommodate a larger range of rotational speeds, the approach applies a two-stage coarse-to-fine strategy: the initial alignment provides a coarse estimation that is fed back to the refinement stage to obtain an accurate rotational speed.
3.2.1. Initial Estimation
In EV-Tach, the rotational speed is calculated by estimating the angle that a propeller has rotated around its axis in a specific time. For example, Figure 6(a) presents two consecutive 10 ms slices of an event stream generated by a rotating propeller with three blades. The two slices share a 7 ms overlap, and the step between the two slices is 3 ms. As the two slices of event stream are generated by the same propeller, the angle of rotation between the two slices can be obtained by aligning them. In this paper, we propose an event-stream registration algorithm based on the iterative closest point (ICP) algorithm (Besl and McKay, 1992; Chen and Medioni, 1991; Recherche et al., 1992). ICP is a widely used algorithm for aligning points in 3D space. For example, in pointcloud registration (Rusu and Cousins, 2011), it aims to find the optimal transformation (rotation and translation) from the source pointcloud to the target pointcloud by minimizing the mean square error (MSE) between the points of the source and target pointclouds after registration.


Rotations in the Spatio-temporal Domain: Before detailing the ICP-based registration algorithm, we first briefly introduce the rotation matrix, which is the core output we need from the event stream registration to calculate the rotational speed. The output of ICP normally consists of a translation matrix and a rotation matrix. The rotation matrix describes the rotation in 3D space and can be decomposed into roll, pitch and yaw, the three independent rotations around each axis, according to Euler's rotation theorem (mathworld, 2022). As shown in Figure 7, the spatio-temporal event stream can be regarded as three-dimensional in (x, y, t). The overall rotation matrix R can be decomposed into three independent rotation matrices in the spatio-temporal domain:

(9)  R = R_z(γ) R_y(β) R_x(α)

where R_x, R_y and R_z are the rotation matrices around the three axes, and α, β and γ are the angles of roll, pitch and yaw respectively. From Figure 7, we can easily identify that the yaw rotation around the t-axis is directly related to the rotation of the propeller. Therefore, we only need to focus on the rotation matrix R_z, which can be expressed as,

(10)  R_z(γ) = [ cos γ   −sin γ   0 ;  sin γ   cos γ   0 ;  0   0   1 ]

With ICP-based registration, we can obtain the yaw rotation angle γ from R_z.
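Given a rotation matrix of the form of Eq. (10), the yaw angle is recovered from its top-left block; a small sketch with names of our choosing:

```python
import numpy as np

def rot_z(gamma):
    """Yaw rotation matrix R_z(gamma) of Eq. (10)."""
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def yaw_from_rotation(R):
    """Recover gamma from a rotation matrix: atan2 of the (1,0) and (0,0)
    entries, i.e. atan2(sin(gamma), cos(gamma))."""
    return float(np.arctan2(R[1, 0], R[0, 0]))
```

`atan2` keeps the correct sign and quadrant of the rotation, which a plain `acos(R[0,0])` would lose.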

ICP-based Rotational Speed Estimation: The ICP-based event stream registration works on two consecutive slices of event stream, P and Q, with length l, overlap o and step length s (s = l − o), where P is termed the source event stream and Q the target event stream. The ICP-based approach starts with a nearest neighbor search to find the correspondence between the events of P and Q, i.e., for each event in P, the closest event in Q is found in the spatio-temporal domain. After the nearest neighbor search, we obtain a subset Q' consisting of the events from Q that are the nearest neighbors of the events in P. Then the cross-covariance between P and Q' is,

(11)  H = (1/N) ∑_{i=1}^{N} (p_i − μ_P)(q_i − μ_Q')^T

where N denotes the number of events in P; p_i and q_i are the spatial-temporal positions of the i-th events of P and Q' respectively; μ_P and μ_Q' are the spatial-temporal positions of the centroids. Then singular value decomposition (SVD) is applied to factorize the cross-covariance matrix, i.e.,

(12)  H = U Σ V^T

The rotation matrix is R = V U^T and the translation is t = μ_Q' − R μ_P. Along with Eq. (9) and Eq. (10), the yaw rotation angle γ can be estimated, and the source event stream is transformed according to the rotation and translation matrices. The operations above are repeated, and the yaw angle γ_k from each iteration k is accumulated:

(13)  γ_total = ∑_k γ_k

The iteration terminates when the yaw rotation diminishes, i.e., |γ_k| < ε.

It is worth noting that DVS output is noisy and non-structural, so the source and target event streams normally cannot be perfectly aligned. To reduce the influence of misalignment on the estimation of rotation, we apply bi-directional registration by simply switching the source and target event streams. The average yaw rotation γ_avg over the two directions is adopted to calculate the rotational speed ω (in rpm) from the initial alignment:

(14)  ω = (γ_avg / 2π) × (60 / s)

where the step length s is expressed in seconds.
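The registration loop and the rpm conversion can be sketched as below. This is a simplified, brute-force single-direction version under our own naming: nearest neighbours are found by an exhaustive distance matrix (a k-d tree would be used in practice), and the reflection guard in the SVD step is the standard Kabsch correction.

```python
import numpy as np

def icp_yaw(source, target, max_iter=50, eps=1e-5):
    """Accumulate the yaw rotating `source` onto `target` (N x 3 arrays of
    spatio-temporal events), iterating until the per-step yaw < eps."""
    src = source.astype(float).copy()
    total_yaw = 0.0
    for _ in range(max_iter):
        # nearest neighbour in the target for every source event
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)   # cross-covariance, Eq. (11)
        U, _, Vt = np.linalg.svd(H)             # Eq. (12)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        yaw = np.arctan2(R[1, 0], R[0, 0])
        src = src @ R.T + t                     # transform the source
        total_yaw += yaw                        # Eq. (13)
        if abs(yaw) < eps:
            break
    return total_yaw

def rpm_from_yaw(yaw, step_s):
    """Eq. (14): yaw (rad) over a step of `step_s` seconds, in rpm."""
    return yaw / (2.0 * np.pi) * 60.0 / step_s
```

On a synthetic pair of slices related by a pure 0.2 rad yaw, the loop converges to that angle in one alignment step; the bi-directional averaging of the text would simply call `icp_yaw` a second time with the arguments swapped.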
3.2.2. Estimation Refinement
For the initial alignment stage, there is no prior knowledge of how fast the rotation is. Therefore, to accommodate high rotational speeds, we choose a small step length s, in case the event streams from different blades overlap and lead to ambiguity in the rotation estimation. However, when the rotational speed is slow, e.g., less than 1000 rpm, only very few events are generated by the rotating object within such a short period. Considering the noisy nature of DVS, the estimation from the initial alignment can be unreliable. To improve the accuracy of the estimation, we propose a simple but effective refinement approach based on the coarse result of the initial alignment. According to the rotational speed from the initial alignment, we can extend the step length to include more events while avoiding the ambiguity caused by rotational symmetry: the new step length cannot lead to a rotation over the angle of symmetry mentioned above; otherwise, the ICP-based registration will align two different parts of the object together and cause an incorrect estimation of the rotation. According to the above constraints, the new step length s' can be inferred as,

(15)  s' = η × (θ_s / 2π) × (60 / ω̂)

where θ_s is the angle of rotational symmetry determined above, ω̂ is the rotational speed (in rpm) from the initial alignment, and η is a scale factor (<1) to accommodate the inaccuracy of the initial alignment. The new step length s' is then used to run the ICP-based event stream registration again to obtain a refined estimation of the rotational speed.
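The refinement constraint is a one-line computation; a sketch with our own naming, where the default η = 0.8 is a hypothetical value (the text only requires η < 1):

```python
import math

def refined_step(theta_sym, rpm_initial, eta=0.8):
    """New step length in seconds: long enough to gather more events, but
    the rotation within it stays below eta * theta_sym (the angle of
    rotational symmetry), so the ICP registration remains unambiguous."""
    omega = rpm_initial * 2.0 * math.pi / 60.0   # coarse estimate in rad/s
    return eta * theta_sym / omega
```

For example, a three-blade propeller (θ_s = 2π/3) spinning at a coarse estimate of 6000 rpm yields a refined step of roughly 2.7 ms, comfortably below the ambiguity limit.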
4. Evaluation
In this section, we evaluate our proposed EV-Tach on datasets collected from monitoring the rotation of a customized device and compare it with laser tachometer on accuracy, robustness, convenience, etc.
4.1. Evaluation Setup
Data Collection: As shown in Figure 8, during data collection, an event camera is used to collect raw event streams of a rotating target on a customized device, and a laser tachometer is also deployed as the benchmark. The customized device is equipped with a servo motor whose rotational speed can be precisely controlled through an interface on a laptop; the highest rotational speed is  rpm. A white plate is connected to the motor shaft as the rotating target, and “propellers” can be printed and attached to the plate as required. The event camera is a DAVIS346 (inivation, 2022), whose spatial resolution is  and temporal resolution is . The DAVIS346 comes with a vari-focal CS-mount lens which can be used to extend the measurement distance. The laser tachometer (UNI-T UT372) provides high-precision measurement with a relative error of , and the results can be easily streamed to a computer via cable. Its measurement distance ranges from . The datasets are collected by varying the rotational speed of the servo motor, the distance to the target, the number of blades, etc.


Evaluation Metrics: Relative mean absolute error (RMAE) is adopted to present the accuracy of measurement and defined as,
(16)  RMAE = (1/N) Σᵢ₌₁ᴺ |ω̂ᵢ − ωᵢ| / ωᵢ
where N is the number of tests (N = 30 in the following evaluation); ω̂ᵢ is the measured rotational speed and ωᵢ is the ground truth. A low RMAE means high measurement accuracy.
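The RMAE metric amounts to a few lines of code. The sketch below is our own illustration of the per-test relative-error average described above:

```python
def rmae(measured, ground_truth):
    """Relative mean absolute error over N paired tests.

    Each test contributes |measured - ground_truth| / ground_truth;
    the result is the mean of these relative errors.
    """
    assert len(measured) == len(ground_truth) and len(measured) > 0
    n = len(measured)
    return sum(abs(m - g) / g for m, g in zip(measured, ground_truth)) / n

# Three toy tests against a 3000 rpm ground truth
err = rmae([2997, 3003, 3000], [3000, 3000, 3000])  # ≈ 0.00067, i.e. 0.67 per mille
```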
Competing Methods: In the evaluations on accuracy, we consider four different measurement methods to compare the accuracy of EV-Tach with the state-of-the-art laser tachometer:
• DVS-Fixed: the DAVIS346 is fixed on a tripod placed on the table while recording event streams via DVS.
• DVS-Handheld: the DAVIS346 is held in a user's hand while recording the event streams.
• Laser-Fixed: the laser tachometer is fixed on a tripod placed on the table. It points to a reflective label on one of the blades.
• Laser-Handheld: the laser tachometer is held in a user's hand. The user tries to keep pointing at the reflective label while measuring the rotation.
4.2. Evaluation on Accuracy of Rotational Speed Estimation
In this section, we provide extensive evaluations of the accuracy of EV-Tach under different parameter settings, including rotational speed, measurement distance, number of blades, and host vibration. The laser tachometer, as the state-of-the-art rotational speed measurement tool, is chosen as the benchmark.
4.2.1. Evaluation on Coarse-to-fine Alignment
EV-Tach applies a coarse-to-fine strategy to refine the estimate obtained from the initial alignment. To show that the refinement stage is effective and that no further refinement is needed, we compute the RMAE of the outputs of three stages of EV-Tach (DVS-Fixed): initial alignment, refinement, and a further refinement. The RMAEs for different rotational speeds, from rpm to rpm, are shown in Figure 9. Comparing the stages, we observe that the first refinement effectively reduces the RMAE for all rotational speeds compared with the initial alignment. For example, via refinement, the average RMAE across all rotational speeds drops significantly from to , approximately a 25-fold improvement in accuracy. Moreover, a further refinement brings no noticeable improvement while consuming extra resources; therefore, a two-stage strategy with initial alignment and a one-time refinement is sufficient to obtain an accurate measurement.

4.2.2. Evaluation on Different Rotational Speeds
Starting from this section, we compare EV-Tach with the laser tachometer under different circumstances. First, the four measurement methods DVS-Fixed, DVS-Handheld, Laser-Fixed and Laser-Handheld are evaluated against different rotational speeds. The measurement distance is set to cm. By varying the rotational speed of the servo motor from rpm to rpm, we obtain the corresponding RMAEs of the four methods, as shown in Figure 10. As mentioned before, each RMAE is obtained by averaging the results of independent tests, and five users are recruited for the handheld measurements. From the figure we observe that all methods except Laser-Handheld produce accurate measurements with RMAEs below . In particular, Laser-Fixed achieves the lowest RMAE (). However, when the laser tachometer is held in hand, the average RMAE of Laser-Handheld rockets to due to the subtle movement of the user's hand. For EV-Tach, DVS-Fixed and DVS-Handheld produce similar RMAEs, and the average RMAE of DVS-Handheld is below , which is over 210 times better than Laser-Handheld. Therefore, we can claim that EV-Tach is robust to subtle hand movement, while the laser tachometer, though designed as a portable device, is not suitable for handheld measurement. It is worth noting that it is very hard to point at the small reflective label attached to a blade when the target is rotating fast: it normally takes at least tens of seconds for a user to point to the correct spot, and the laser then deviates easily from the label due to subtle hand movement. Comparatively, EV-Tach is significantly more convenient. Users only need to make the camera approximately face the front of the rotating target, and the procedure takes almost no time; according to our observation, a deviation of up to 20 degrees from the front view is tolerable. Therefore, EV-Tach is superior to the laser tachometer in ease of use.

4.2.3. Evaluation on Measurement Distance
The distance to the rotating target can vary in practical use. In this section, we evaluate the accuracy of the four methods against different measurement distances. Again, the RMAEs of the four methods are computed by gradually increasing the measurement distance from cm to cm, and the results are shown in Figure 11. Each RMAE in the figure is obtained by averaging the RMAEs obtained under different rotational speeds. From the results we observe that Laser-Fixed is not affected by the measurement distance: as far as the reflected laser can reach, it produces stable and accurate measurements. The RMAEs of the remaining methods all increase with the measurement distance, for different reasons. For Laser-Handheld, as the distance increases, it becomes more difficult for users to point the laser at the small reflective label and keep it from deviating from the correct spot during the measurement. The accuracy of the EV-Tach approaches, DVS-Fixed and DVS-Handheld, declines because the event stream shrinks as the distance grows. However, this can be mitigated by using a zoom lens on the event camera. For example, by changing the focal length of the DAVIS346 from mm to mm, we can zoom in on the rotating target. According to our evaluation, when the measurement distance is cm, the average RMAE of DVS-Handheld with the mm focal length is below , which is similar to that of DVS-Handheld with mm at a cm measurement distance.
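Why a longer focal length compensates for distance follows from the pinhole camera model: the apparent size of the target on the sensor scales with f/d, so doubling the focal length offsets doubling the distance. A back-of-envelope sketch; the pixel pitch value is our illustrative assumption, roughly the order of the DAVIS346's pitch:

```python
def apparent_size_px(target_size_m, distance_m, focal_mm, pixel_pitch_um=18.5):
    """Pinhole model: how many pixels a target of given physical size spans.

    image size on sensor = target size * focal length / distance.
    pixel_pitch_um is an assumed sensor pixel pitch, not a measured value.
    """
    image_size_m = target_size_m * (focal_mm * 1e-3) / distance_m
    return image_size_m / (pixel_pitch_um * 1e-6)

near = apparent_size_px(0.10, 0.5, 8.0)    # 10 cm target, 0.5 m away, 8 mm lens
far = apparent_size_px(0.10, 1.0, 16.0)    # 2x the distance, 2x the focal length
# near == far: the target spans the same number of pixels in both cases
```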

4.2.4. Evaluation on Different Number of Blades
Generally, as three-blade propellers are the most common rotating targets (Chaudhuri et al., 2022), the evaluations above set the number of blades to three. However, rotating targets may vary in the number of blades; for example, most drones are equipped with two-blade propellers. We evaluate the accuracy of EV-Tach in estimating the rotational speed of propellers with two, three and four blades respectively. By gradually changing the rotational speed of the servo motor, the RMAEs are calculated and presented in Figure 12. From the results we observe that EV-Tach achieves similar measurement accuracy for all types of blades, with average RMAEs of , and . Therefore, EV-Tach works on rotating propellers with different numbers of blades.
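The number of blades also bounds the fastest rotation that a given step length can resolve unambiguously, since k identical blades make rotations indistinguishable modulo 2π/k. The sketch below is our own illustration of this relationship, not taken from the paper's implementation:

```python
import math

def rpm_from_rotation(theta_rad, dt_s):
    """Convert a registered rotation angle over one window to rpm."""
    return theta_rad / dt_s * 60.0 / (2 * math.pi)

def max_unambiguous_rpm(num_blades, dt_s):
    """With k identical blades, a rotation is only resolvable while it stays
    below the symmetry angle 2*pi/k per window; beyond that, registration
    may align a blade onto its neighbour."""
    return rpm_from_rotation(2 * math.pi / num_blades, dt_s)

# With a 1 ms window: a two-blade propeller is unambiguous up to 30000 rpm,
# a four-blade one only up to 15000 rpm.
```

Fewer blades therefore tolerate higher speeds for the same window, which is one reason the step length must adapt to the target.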

4.2.5. Evaluation on Robustness to Host Vibration

In real-world scenarios, the hosting device is sometimes not stable, e.g., vibrating. To obtain accurate measurements, the tachometer should accommodate slight motion of the hosting device to some extent. Therefore, we simulate a vibrating host to compare the robustness of EV-Tach and the laser tachometer. The DAVIS346 and the laser tachometer are placed cm away from the servo motor, which is located on a table. A number of people are asked to shake the table to simulate the vibration. By gradually changing the rotational speed from rpm to rpm, the RMAEs are calculated and shown in Figure 13. From the results we observe that Laser-Fixed and Laser-Handheld produce significantly higher RMAEs when measuring the rotating target on a vibrating host: the average RMAEs are 547‰ and 399‰ respectively, which indicates the laser tachometer completely fails to provide reasonable measurement results. On the contrary, EV-Tach demonstrates great robustness to the vibrating host: the average RMAE is only for fixed and for handheld, which is close to the stable-host scenario and is almost times better than the laser tachometer.
5. Discussion
As described and evaluated in this paper, EV-Tach shows a number of superior characteristics for rotational speed measurement over the state-of-the-art laser tachometer, which is dominant in the market. First and foremost, EV-Tach is a true handheld tachometer that is robust to subtle movement of the user's hand, whereas the laser tachometer, though designed as a portable device, loses accuracy significantly when used handheld. Second, EV-Tach is more convenient to use than the laser tachometer and needs no preparation before measurement, while the laser tachometer must be pointed at a small reflective label, which takes users tens of seconds when the target is rotating fast. Third, EV-Tach is robust to vibration of the host of the rotating target, which causes the laser tachometer's measurement to fail. Fourth, EV-Tach is able to measure multiple rotating targets at the same time, which is impossible for a laser tachometer. Finally, compared with other vision-based methods, it achieves a significantly higher measurement range than those with conventional RGB cameras and is more cost-effective than those using high-speed cameras.
However, owing to the principle of DVS, EV-Tach also shares some limitations with other vision-based approaches. First of all, it requires the rotating targets to be in the form of propellers or to have uneven texture, so that different phases of the rotation can be detected. However, like the reflective label for the laser tachometer, the usability of EV-Tach can be extended if unique patterns or labels are allowed to be attached to the rotating object. For example, when the rotating object is a flat disc with uniform texture, a label (e.g., a straight line) in high contrast to the disc can be attached to aid the measurement. Second, constrained by the hardware design, the accuracy and measurement range of EV-Tach in this paper are lower than those of the laser tachometer. However, considering the spatial resolution of the DAVIS346 is only , the performance of EV-Tach is expected to improve with event cameras of higher spatial and temporal resolution.
6. Conclusion
In this paper, we propose EV-Tach, a rotational speed measurement system based on dynamic vision sensing on mobile devices, to achieve efficient, high-fidelity and convenient estimation. EV-Tach starts by extracting multiple rotating targets via K-means clustering, with a heatmap-based initial-centroid selection method proposed to improve the robustness of the clustering. Then the angle of rotation is estimated via a coarse-to-fine ICP-based event stream registration method, from which the rotational speed is calculated. Finally, extensive evaluations are conducted, and the results show that the accuracy of EV-Tach is comparable to the laser tachometer in fixed deployment and is over better in handheld measurement mode. EV-Tach is also robust to a shaking host of the rotating target, where the laser tachometer fails to provide reasonable results.
References
- Amir et al. (2017) Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, et al. 2017. A low power, fully event-based gesture recognition system. In Proceedings of the IEEE conference on computer vision and pattern recognition. 7243–7252.
- Arthur and Vassilvitskii (2007) David Arthur and Sergei Vassilvitskii. 2007. K-Means++: The Advantages of Careful Seeding (SODA ’07). Society for Industrial and Applied Mathematics, USA.
- Benosman et al. (2013) Ryad Benosman, Charles Clercq, Xavier Lagorce, Sio-Hoi Ieng, and Chiara Bartolozzi. 2013. Event-Based Visual Flow. IEEE Transactions on Neural Networks and Learning Systems (2013). https://doi.org/10.1109/TNNLS.2013.2273537
- Besl and McKay (1992) P.J. Besl and Neil D. McKay. 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 2 (1992), 239–256. https://doi.org/10.1109/34.121791
- Bi et al. (2019) Yin Bi, Aaron Chadha, Alhabib Abbas, Eirina Bourtsoulatze, and Yiannis Andreopoulos. 2019. Graph-based object classification for neuromorphic vision sensing. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 491–501.
- Chaudhuri et al. (2022) Anudipta Chaudhuri, Rajkanya Datta, Muthuselvan Praveen Kumar, João Paulo Davim, and Sumit Pramanik. 2022. Energy Conversion Strategies for Wind Energy System: Electrical, Mechanical and Material Aspects. Materials 15, 3 (2022). https://doi.org/10.3390/ma15031232
- Chen and Medioni (1991) Y. Chen and G. Medioni. 1991. Object modeling by registration of multiple range images. In Proceedings. 1991 IEEE International Conference on Robotics and Automation. 2724–2729 vol.3. https://doi.org/10.1109/ROBOT.1991.132043
- Davies and Bouldin (1979) David L. Davies and Donald W. Bouldin. 1979. A Cluster Separation Measure. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-1, 2 (1979), 224–227. https://doi.org/10.1109/TPAMI.1979.4766909
- directindustry (2022) directindustry. 2022. The Basics of Centrifuge Operation and Maintenance. https://www.directindustry.com/industrial-manufacturer/mechanical-tachometer-135087.html
- Gallego et al. (2022) Guillermo Gallego, Tobi Delbrück, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew J. Davison, Jörg Conradt, Kostas Daniilidis, and Davide Scaramuzza. 2022. Event-Based Vision: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 1 (2022), 154–180. https://doi.org/10.1109/TPAMI.2020.3008413
- Gallego and Scaramuzza (2017) Guillermo Gallego and Davide Scaramuzza. 2017. Accurate Angular Velocity Estimation With an Event Camera. IEEE Robotics and Automation Letters 2 (2017), 632–639.
- Haynes (2022) Haynes. 2022. Vehicle Inspections. https://haynes.com/en-us/tips-tutorials/what-know-about-vehicle-inspections-all-50-states
- inivation (2022) inivation. 2022. DAVIS346. https://shop.inivation.com/collections/davis346
- Kim et al. (2016) Hyuno Kim, Yuji Yamakawa, Taku Senoo, and Masatoshi Ishikawa. 2016. Visual encoder: robust and precise measurement method of rotation angle via high-speed RGB vision. Opt. Express 24, 12 (Jun 2016), 13375–13386. https://doi.org/10.1364/OE.24.013375
- Li et al. (2019) Lin Li, Hongli Hu, Yong Qin, and Kaihao Tang. 2019. Digital Approach to Rotational Speed Measurement Using an Electrostatic Sensor. Sensors (Basel, Switzerland) 19 (2019).
- mathworld (2022) mathworld. 2022. EulerAngles. https://mathworld.wolfram.com/EulerAngles.html
- Natili et al. (2020) Francesco Natili, Francesco Castellani, Davide Astolfi, and Matteo Becchetti. 2020. Video-Tachometer Methodology for Wind Turbine Rotor Speed Measurement. Sensors 20, 24 (2020). https://doi.org/10.3390/s20247314
- Qi et al. (2017a) Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. 2017a. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition. 652–660.
- Qi et al. (2017b) Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. 2017b. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 5105–5114.
- Recherche et al. (1992) E Recherche, Et Automatique, Sophia Antipolis, and Zhengyou Zhang. 1992. Iterative Point Matching for Registration of Free-Form Curves. Int. J. Comput. Vision 13 (07 1992).
- Rusu and Cousins (2011) Radu Bogdan Rusu and Steve Cousins. 2011. 3d is here: Point cloud library (pcl). In 2011 IEEE international conference on robotics and automation. IEEE, 1–4.
- UNI-T (2022) UNI-T. 2022. UT372 Tachometer. https://www.uni-trend.com/meters/html/product/Environmental/Environmental_Tester/UT370_Tachometers/UT372.html
- Wang et al. (2019b) Qinyi Wang, Yexin Zhang, Junsong Yuan, and Yilong Lu. 2019b. Space-time event clouds for gesture recognition: From RGB cameras to event cameras. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 1826–1835.
- Wang et al. (2018) Tianyu Wang, Yong Yan, Lijuan Wang, and Yonghui Hu. 2018. Rotational Speed Measurement Through Image Similarity Evaluation and Spectral Analysis. IEEE Access 6 (2018), 46718–46730. https://doi.org/10.1109/ACCESS.2018.2866479
- Wang et al. (2019a) Yanxiang Wang, Bowen Du, Yiran Shen, Kai Wu, Guangrong Zhao, Jianguo Sun, and Hongkai Wen. 2019a. EV-Gait: Event-based robust gait recognition using dynamic vision sensors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6358–6367.
- Wang et al. (2017) Yunfan Wang, Lijuan Wang, and Yong Yan. 2017. Rotational speed measurement through digital imaging and image processing. In 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). 1–6. https://doi.org/10.1109/I2MTC.2017.7969697
- Wang et al. (2021) Yanxiang Wang, Xian Zhang, Yiran Shen, Bowen Du, Guangrong Zhao, Lizhen Cui Cui Lizhen, and Hongkai Wen. 2021. Event-stream representation for human gaits identification using deep neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).
- Zhai et al. (2019) Yanwang Zhai, Shiyao Fu, Ci Yin, Heng Zhou, and Chunqing Gao. 2019. Detection of angular acceleration based on optical rotational Doppler effect. Optics Express 27 (05 2019), 15518. https://doi.org/10.1364/OE.27.015518
- Zhang et al. (2008) Zhe Zhang, Junxi Zhang, and Huifeng Xue. 2008. Improved K-means clustering algorithm. In 2008 Congress on Image and Signal Processing, Vol. 5. IEEE, 169–172.
- Zhao et al. (2018) Yipeng Zhao, Yongbin Li, Shijie Guo, and Tiejun Li. 2018. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter. Journal of Sensors 2018 (03 2018), 1–9. https://doi.org/10.1155/2018/1037083
- Zhong et al. (2018) Jianfeng Zhong, Shuncong Zhong, Qiukun Zhang, and Z.K Peng. 2018. Measurement of Instantaneous Rotational Speed Using Double-sine-varying-density Fringe Pattern. Mechanical Systems and Signal Processing 103 (03 2018), 117–130. https://doi.org/10.1016/j.ymssp.2017.10.011
- Zhu et al. (2018) Alex Zihao Zhu, Liangzhe Yuan, Kenneth Chaney, and Kostas Daniilidis. 2018. EV-FlowNet: Self-supervised optical flow estimation for event-based cameras. arXiv preprint arXiv:1802.06898 (2018).
- Zhu and Yu (2011) Xiao-dong Zhu and Song-nian Yu. 2011. Measurement angular velocity based on video technology. In 2011 4th International Congress on Image and Signal Processing, Vol. 4. 1936–1940.