In this paper we present a system that automatically classifies coins. The system is flexible: it can identify coins with different features, photographed under different lighting conditions. For this purpose, a set of robust techniques for thresholding, edge detection, and frequency transforms is used to generate a fingerprint that is as meaningful and as invariant as possible for every coin class. In practice, digital images cannot always be captured under ideal conditions, and inconsistencies arise from varying lighting as well as from the performance of the capturing device. This article proposes a method to reduce the problems caused by lighting, so that the extracted image characteristics are more accurate. The proposed solution improves an existing automatic coin classification algorithm by applying illumination correction before the actual classification.
Keywords: Canny edge detection, coin classification, DCT transform, lighting invariant, Otsu thresholding, rotation invariant.
Institutions working with historical coins could use a system for automatic coin classification, as could historians, private coin collectors, and other people interested in the domain. A free online service able to cluster coins based on their similarities would be very useful to this community. The same system could also be used for sorting or classifying the various types of coins collected in European countries after the introduction of the euro.
- RELATED WORK
Various approaches to coin recognition are present in the modern scientific literature.
A system based on a rotation-invariant neural network was described by Fukumi et al. as capable of distinguishing certain coins (a Japanese 500-yen piece from a Korean 500-won piece). Rotational invariance is achieved by generating the rotational group for a coarse model of the coin and encoding the resulting patterns into a neural network. The neural network method has drawbacks; for example, the rules for coin rejection are not very clear. The ability to reject coins is very important, since there is no way of knowing in advance which kinds of coins will be presented to the system.
Strategies such as decision trees, neural networks, and Bayesian classifiers have also been compared. One result was a variant of the decision tree algorithm capable of rejecting coins whose defining attributes fall outside an acceptance range. As a drawback, this method is difficult to extend to images.
Another approach is based on color images: translational invariance is obtained through segmentation, and rotational invariance through polar-coordinate representation and correlation.
A comprehensive, up-to-date system has also been described, in which two of the most common approaches (edge-matching features in polar coordinates and eigenspace matching) are compared. The approach proposed here falls into the first class.
One might consider this a trivial problem, since automatic machines that accept coins are everywhere today. However, those machines work by measuring physical properties such as weight, diameter, thickness, permeability, conductivity, and electromagnetic field alteration, and are thus only capable of distinguishing among a limited number of coin types.
For old coins the task is even harder, as they were struck from handmade molds and the manufacturing process itself was very error prone.
One example is Figure 1, where two coins of the same type are compared. The general shape is the same, but when superimposing them it can be seen that even the important edges have different shapes and positions. A much more permissive comparison system is needed to match these coins, yet it must also handle modern coins, which are produced on automated lines and are much more similar to one another.
Fig. 1. Comparison of two coins of the same type. Even general contours don’t match.
One approach addressing this issue aligns the images using SIFT features borrowed from image registration: warping the coin over all the database coins and choosing the smallest energy yields the class.
- FEATURE EXTRACTION
The more characteristics are extracted from coin images, the higher the precision in identifying similar coins. Previous work introduces three main features related to coin images:
- the picture on the coin (stamp)
- the texture on the coin
- the text on the coin
This approach concentrates on the first two characteristics. For a proper measurement of these features, a coin image preprocessing phase is needed, so that the input images reflect, as closely as possible, uniform lighting conditions and normalized contrast. One very important preprocessing step in coin analysis is the removal of specular reflection, which strongly alters almost all coin photographs. A new application was constructed that includes the following processing, and new results are offered in the conclusions section.
3.1. Specular reflection removal
Because the viewing position and the position of the light source should not influence which parts of the coin are visible, a specular removal phase can help normalize the photographic conditions. Usual specular removal methods work only on color images (analyzing chromaticity, for example) to deduce the intensity of the specular component and diminish it. When light hits an object, some rays are immediately reflected by the outer surface, while others penetrate the object. The rays that pass through the surface are either transmitted or absorbed by the material, or are reflected back out of the object after being scattered by the material's particles. The immediately reflected rays are called specular reflections, and those that penetrated the object and were reflected back are called diffuse reflections. Thus, specular light is highly oriented and not influenced by the material's properties, because it bounces promptly, while diffuse light is scattered by the particles inside the object and is strongly influenced by them. To separate the two more accurately, the differences between them must be described:
- The reflections have different degrees of polarization (DOP). Specular reflection is generally more polarized than diffuse reflection. 
- The intensity distribution of diffuse reflections can be well approximated by Lambert's law, while the intensity distribution of specular reflections follows the Torrance-Sparrow reflection model. By isolating the diffuse component, powerful Lambertian-based tools can be applied to real-world scenes for recognition and reconstruction.
- The color of specular reflection is determined by the object's surface spectral reflectance, which is mostly constant throughout the visible spectrum; this causes the color of specular reflections to resemble that of the light source, while the color of diffuse reflection is determined by the object's body spectral reflectance.
Since all coins are made from a single material, they are in most cases monochrome, so the usual color-based methods do not work; consequently, processing them in grayscale does not reduce the quality of classification.
The specular component can be obtained in luminance space by binarizing the image (Fig. 2). The final image to be processed then results from subtracting this specular component from the L component.
This step has to be performed before the actual edge detection, in order to obtain a more uniform edge distribution.
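As an illustration, the specular attenuation described above can be sketched as follows. This is a minimal version assuming an 8-bit luminance image; the binarization threshold and the attenuation strength are illustrative parameters, not values from the paper:

```python
import numpy as np

def remove_specular(luminance, threshold=230, strength=0.5):
    """Attenuate specular highlights in an 8-bit luminance image.

    Pixels above `threshold` are treated as the specular component;
    the amount exceeding the threshold is scaled by `strength` and
    subtracted, flattening the highlights.
    """
    img = luminance.astype(np.float64)
    specular = np.clip(img - threshold, 0, None)  # highlight excess (the "binarized" component)
    corrected = img - strength * specular         # subtract the specular component from L
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Non-highlight pixels pass through unchanged, so only the shiny regions are darkened.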
Fig. 2. Original image (left) compared with the preprocessed (right)
3.2 Stamp (picture) representation
The stamp is the most important part of a coin’s fingerprint. This is based on the statistical distribution of the edges that appear on the surface of the coin. The extraction of the edge characteristic is a process consisting of three steps:
- edge detection
- edge distance distributions
- edge angle distribution
A similar method describing the edge distance and angle distributions has been presented in earlier work.
3.2.1 Edge detection
Based on our practical experiments and on Rangarajan's survey of various edge detection algorithms, we concluded that the most suitable algorithm for this particular project is Canny's edge detector. We also carried out experiments with the Sobel kernel, Prewitt's operator, and Roberts' cross operator. Other algorithms exist that could further improve this edge detection.
Canny's algorithm has six steps:
Step 1. Filter out any noise in the original image in order to avoid the detection of false edges. This is done by applying a Gaussian filter, or other filters for specific purposes. The Gaussian used in the implementation is presented in Fig. 3.
Fig. 3. Discrete approximation to Gaussian function with σ=1.4
Step 2. Compute the gradient of the image in order to determine the edge strength by first applying the Sobel operator on the image to compute a 2-D spatial gradient on both axes and then approximating the absolute gradient magnitude (edge strength) for each point.
The magnitude is approximated using the formula |G| = |Gx| + |Gy|.
Step 3. Compute the direction of the edge from the gradients in the X and Y directions: θ = arctan(Gy / Gx).
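Steps 2 and 3 can be sketched with a naive Sobel implementation (valid-mode correlation only, for brevity; a real implementation would pad the borders):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # [[-1,-2,-1],[0,0,0],[1,2,1]]

def convolve2d(img, kernel):
    """Valid-mode 2-D correlation, enough for this sketch."""
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def gradient(img):
    """Return the approximate edge strength and direction per pixel."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    magnitude = np.abs(gx) + np.abs(gy)         # |G| ≈ |Gx| + |Gy|
    direction = np.degrees(np.arctan2(gy, gx))  # θ = atan2(Gy, Gx), in degrees
    return magnitude, direction
```

Using `arctan2` rather than a plain `arctan` division avoids the singularity at Gx = 0.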
Step 4. With the edge direction computed, the next step is to relate it to a direction that can be traced in an image.
Given a 5×5 image as the one described in Fig. 5 and analyzing pixel “P” it can be seen that there are only four possible directions when describing the surrounding pixels – 0 degrees (for the horizontal direction), 45 degrees (along the main diagonal), 90 degrees (for the vertical direction), or 135 degrees (along the secondary diagonal).
x x x x x
x x x x x
x x P x x
x x x x x
x x x x x
Fig. 5. Finding possible directions
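The mapping of a raw gradient angle to one of the four traceable directions can be written as:

```python
def quantize_direction(angle_deg):
    """Map a gradient direction (in degrees) to the nearest traceable
    direction in the pixel grid: 0, 45, 90 or 135 degrees."""
    a = angle_deg % 180                # edge direction is defined modulo 180°
    for center in (0, 45, 90, 135):
        if abs(a - center) <= 22.5:    # within 22.5° of a canonical direction
            return center
    return 0                           # angles in (157.5, 180) wrap back to 0°
```

Each canonical direction therefore owns a 45-degree sector centered on it.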
Step 5. With the edge direction estimated, non-maximum suppression is applied along the edge: every pixel that is not a local maximum of the gradient magnitude is removed, so the result is a thin line in the output image.
Step 6. Finally, hysteresis thresholding is employed as a means of eliminating potential streaking. Streaking, the breaking of an edge contour, may occur when the operator output fluctuates above and below a threshold. Hysteresis does not use a single threshold but two: a high one and a low one. A pixel with a value greater than the high threshold T1 is presumed to be an edge pixel and is marked as such immediately. A pixel connected to such an edge pixel whose value is greater than the lower threshold T2 is also selected as an edge pixel. Following an edge can therefore be expressed as finding a gradient of at least T1 and continuing as long as the value does not fall below T2. The method can be further improved, as indicated in the literature.
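A sketch of this hysteresis step, assuming the gradient magnitude image and the two thresholds are given (breadth-first growth over 8-connected neighbours):

```python
import numpy as np
from collections import deque

def hysteresis(magnitude, t_high, t_low):
    """Double-threshold edge tracking: seed from pixels >= t_high,
    then grow along 8-connected neighbours while values stay >= t_low."""
    strong = magnitude >= t_high
    candidate = magnitude >= t_low
    edges = np.zeros_like(magnitude, dtype=bool)
    edges[strong] = True
    queue = deque(zip(*np.nonzero(strong)))     # start from the strong pixels
    h, w = magnitude.shape
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and candidate[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True        # weak pixel connected to an edge
                    queue.append((ni, nj))
    return edges
```

Weak pixels not connected to any strong pixel are never reached and are thus discarded.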
These two thresholds can be calculated using Otsu's algorithm, a widely used binarization algorithm that has spawned many forms and adaptations because of its stability and high speed. It is a threshold-based, global binarization algorithm that tries to maximize the inter-class variance (equivalently, to minimize the intra-class variance). For a candidate threshold t, the inter-class variance is defined as
σ_B²(t) = ω0(t)·ω1(t)·[μ0(t) − μ1(t)]²
where ω0(t) and ω1(t) are the probabilities of the two classes, computed from the number of pixels n_i with value i, and μ0(t), μ1(t) are the class means. The equation thus tries to separate the means of the clusters while keeping each cluster at a high probability of occurrence.
An exhaustive search over all candidate thresholds is employed. When several thresholds are equally optimal, the accepted value is their mean. This is very important for the quality of the binarization after the post-processing threshold stage.
The final step of Canny's edge detector uses two thresholds. The best values can be determined experimentally for a given image, but that is not an option for our project: we need a way to set these values automatically.
A solution is to determine the Otsu threshold for a given coin image and derive T1 and T2 from it. We experimentally deduced that T1 should equal 50% of the Otsu threshold and T2 should equal 30% of the same value.
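Otsu's exhaustive search and the derivation of the two Canny thresholds can be sketched as follows; the 50%/30% factors come from the experiments above, while the function names are ours:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive search maximizing the inter-class variance
    sigma_B^2(t) = w0(t) * w1(t) * (mu0(t) - mu1(t))^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total            # probability of the dark class
        w1 = 1.0 - w0                          # probability of the bright class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / (w0 * total)
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / (w1 * total)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def canny_thresholds(gray):
    """Derive the hysteresis thresholds from Otsu's value."""
    t = otsu_threshold(gray)
    return 0.5 * t, 0.3 * t   # T1 (high) = 50 %, T2 (low) = 30 % of Otsu
```

For a bimodal coin histogram the search lands between the two modes, and the derived pair always satisfies T1 > T2.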
Some problems occurred during the experiments on medieval and older coins: those coins mostly have an irregular, higher relief, a characteristic not found on modern coins, which tend to be flat. Depending on the lighting conditions and the way shadows form when the picture is taken, different photographs of the same coin can yield totally different results from Canny's edge detector.
The program must also overcome the problem of diffuse illumination variation, because the presented work considers photographs taken in an uncontrolled environment. A proposed method is based on transforming the image from RGB space to LUV space. Fig. 6 shows a modern and a medieval coin image transformed into LUV space, together with their three components; the L, U, and V components were obtained by scaling the LUV values to the range 0-255.
Fig. 6. A modern (upper row) and medieval (lower row) coin picture a) original picture; b) L-component; c) U-component; d) V-component
Using Canny's edge detector on the image's L-component, we obtained better results than with the grayscale representation of the image (Fig. 7).
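A minimal conversion from an RGB image to the L (lightness) component can be sketched as below. Note the assumptions, which the paper does not specify: linear RGB values in [0, 1] and the standard Rec. 709 luminance weights.

```python
import numpy as np

def rgb_to_L(rgb):
    """CIE L* (lightness) of an image with linear RGB values in [0, 1].
    L* is shared by the CIELUV and CIELAB spaces."""
    # Relative luminance Y from linear RGB (Rec. 709 weights)
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    delta = 6.0 / 29.0
    # Piecewise cube-root mapping from the CIE definition of L*
    f = np.where(y > delta ** 3, np.cbrt(y), y / (3 * delta ** 2) + 4.0 / 29.0)
    L = 116.0 * f - 16.0                         # L* in [0, 100]
    return np.clip(L * 255.0 / 100.0, 0, 255)    # scaled to 0-255 as in Fig. 6
```

Edge detection is then run on this single-channel image instead of the plain grayscale average.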
Fig. 7. Grayscale image and corresponding Canny’s edges (A1, A2) Image’s L-component and associated Canny’s edges (B1, B2)
3.2.2 Edge distance distribution
The edge distance distribution measures the distribution of the distances between edge pixels and the center of the coin. It is obtained by partitioning the coin into concentric circular rings, as shown in Fig. 8.
Fig. 8. Distance distribution
The number of edge pixels in each ring is accumulated, and the resulting histogram is normalized to provide an estimate of the edge distance distribution. Edge distance distributions are rotation invariant by definition.
Edge distance distributions are used in a multiscale manner, by measuring the histograms for different numbers of bins (2, 4, 8, 16, and 32).
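The ring-based distance histogram and its multiscale variant can be sketched as (assuming a boolean edge map with the coin centered in the image):

```python
import numpy as np

def edge_distance_histogram(edges, n_bins):
    """Normalized histogram of edge-pixel distances from the image
    centre, using `n_bins` concentric rings of equal width."""
    h, w = edges.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(edges)
    r = np.hypot(ys - cy, xs - cx)            # distance of each edge pixel
    r_max = np.hypot(cy, cx) + 1e-9           # farthest possible pixel
    hist, _ = np.histogram(r, bins=n_bins, range=(0, r_max))
    return hist / max(hist.sum(), 1)          # normalize (guard: empty edge map)

def multiscale_distance_fingerprint(edges):
    """The five rotation-invariant distance distributions (2..32 bins)."""
    return [edge_distance_histogram(edges, b) for b in (2, 4, 8, 16, 32)]
```

Since distances do not change when the coin is rotated around its center, each histogram is rotation invariant by construction.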
3.2.3 Edge angle distribution
Although edge distance distributions are powerful instruments and occupy an important place in coin classification, they do not capture all the information; for example, the relative angular distribution of the edge pixels is not represented. This can be expressed using edge angle distributions, which are measured by dividing the coin into pie-shaped sectors, as represented in Fig. 9. The edge pixels in each sector are accumulated, and the resulting histogram is normalized to provide an estimate of the relative angular distribution of the coin's edge pixels.
The difference between edge distance distributions and edge angle distributions is that the latter are not rotation invariant by definition. Rotation invariance of the edge angle feature can be obtained by computing the magnitude of the Fourier transform of the histogram, which makes it invariant under circular shifts (corresponding to rotations of the coin). For this to work, a large number of bins is required, since a rotation of the coin should translate into a circular shift of the histogram rather than a redistribution among the histogram accumulators.
In this application the edge angle histogram is measured at a scale of 240 bins; the histogram is then transformed to the frequency domain, keeping the low-order magnitude values and dropping the high-order ones.
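A sketch of the rotation-invariant angle fingerprint (sector histogram followed by the Fourier magnitude):

```python
import numpy as np

def edge_angle_fingerprint(edges, n_bins=240):
    """Angle histogram over pie-shaped sectors, made rotation invariant
    by taking the magnitude of its Fourier transform: a circular shift
    of the histogram leaves the magnitude spectrum unchanged."""
    h, w = edges.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(edges)
    angles = np.arctan2(ys - cy, xs - cx)                 # angle of each edge pixel
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    hist = hist / max(hist.sum(), 1)                      # normalize
    return np.abs(np.fft.fft(hist))                       # shift-invariant magnitudes
```

A 90-degree rotation of the edge map shifts the 240-bin histogram by exactly 60 bins, so the magnitude spectrum, and hence the fingerprint, is unchanged.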
Fig. 9. Angle distribution
These distributions are well suited to the problem mentioned in the introduction, as the size of the segments can be adjusted to accommodate the large errors of old coins or the small errors of modern coins. In Figure 10 it can be seen that the main regions fall inside the same segment.
Fig. 10. Splitting the coins: the general features fall in the same resulting segments.
3.3 Texture of the coin
The texture component of the fingerprint is based on two DCT (Discrete Cosine Transform) parts of the LUV representation of the coin image, inspired by earlier work.
The discrete cosine transform makes it possible to split the image into sections (spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT transforms a signal from the spatial domain into the frequency domain. The resulting coefficients are conveniently ordered from the lowest to the highest frequency. This property, together with the psychovisual observation that the human eye is less sensitive to higher frequencies, makes it possible to compress a spatial signal by transforming it to the frequency domain, dropping the high-order values and keeping the low-order ones.
A 2D-DCT is calculated over the luminosity (L-channel) of the image. The first coefficient represents the DC value, i.e., the average luminosity of the image; the remaining coefficients represent progressively higher frequencies. Some of these coefficients (the first 10 of them) are normalized to form the grayscale section of the fingerprint. This part of the fingerprint represents the basic composition of the image.
After that, a 2D-DCT of the two color components are calculated and used for the color part of the fingerprint. In this case, just the first three of each are taken into consideration, since the human eye is much more sensitive to luminosity than to color. This part of the fingerprint represents the color composition of the image, with reduced spatial resolution compared to the gray scale part.
The 2D DCT is given by the formula
F(u,v) = α(u)·α(v)·Σ_{x=0..N−1} Σ_{y=0..M−1} f(x,y)·cos[(2x+1)uπ/(2N)]·cos[(2y+1)vπ/(2M)]
where α(0) = √(1/N) and α(k) = √(2/N) for k > 0 (and analogously for M).
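A direct (non-optimized) implementation of this transform, plus the selection of low-order coefficients, might look like this; keeping the coefficients in raster order is a simplification of the usual zig-zag scan:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2-D DCT-II:
    F(u,v) = a(u) a(v) * sum_x sum_y f(x,y)
             * cos[(2x+1) u pi / 2N] * cos[(2y+1) v pi / 2M]."""
    n, m = img.shape
    x, y = np.arange(n), np.arange(m)
    # Cosine basis matrices: Cx[u, x] = cos(pi * u * (2x+1) / 2N)
    Cx = np.cos(np.pi * np.outer(np.arange(n), 2 * x + 1) / (2 * n))
    Cy = np.cos(np.pi * np.outer(np.arange(m), 2 * y + 1) / (2 * m))
    F = Cx @ img @ Cy.T
    a = lambda k, size: np.where(k == 0, np.sqrt(1.0 / size), np.sqrt(2.0 / size))
    return F * np.outer(a(np.arange(n), n), a(np.arange(m), m))

def texture_coeffs(channel, count):
    """First `count` DCT coefficients in raster order (a simplification;
    the paper keeps 10 for L and 3 per colour channel)."""
    return dct2(channel).ravel()[:count]
```

For a constant image only the DC coefficient F(0,0) is non-zero, matching the interpretation of F(0,0) as average intensity.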
- CLASSIFICATION AND IDENTIFICATION
The two following stages deal, in order, with the topics of classification and identification.
In the previous section we showed coin-specific characteristics that can be used to tag a coin image. This section describes how to use these features to decide upon the class of a coin in a well-defined way.
The fingerprint of an image has the following composition:
- five distance distribution vectors (for 2, 4, 8, 16 and 32 bins)
- a Fourier transform of the magnitude vector over the angle distribution for 240 bins
- first 8 components of the DCT over the L-channel
- first 4 DCT components for the U-channel
- first 4 DCT components for the V-channel.
Similarity is checked by calculating the proximity between images as a distance value. A natural choice for this purpose is the Euclidean distance between two points in an N-dimensional space:
d(a,b) = √( Σ_{i=1..N} (a_i − b_i)² )
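The fingerprint concatenation and the distance computation can be sketched as (function names are ours):

```python
import numpy as np

def build_fingerprint(dist_hists, angle_fft, dct_l, dct_u, dct_v):
    """Concatenate all feature vectors into one point in N-dimensional space."""
    parts = (*dist_hists, angle_fft, dct_l, dct_u, dct_v)
    return np.concatenate([np.ravel(p) for p in parts])

def euclidean_distance(a, b):
    """d(a, b) = sqrt(sum_i (a_i - b_i)^2)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))
```

With the component sizes listed above (2+4+8+16+32 distance bins, 240 Fourier magnitudes, and 8+4+4 DCT coefficients), each fingerprint is a point in a 318-dimensional space.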
The identification has two steps:
First, the DCT part of the fingerprint is used to preselect a group of images with similar texture.
Then, the distance and angle distributions are used to identify the three best matches within the preselected group.
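This two-step identification can be sketched as below; the dictionary keys and the preselection size are illustrative, not from the paper:

```python
import numpy as np

def identify(query, database, preselect=10):
    """Two-step identification: preselect by DCT (texture) distance,
    then rank the preselected group by the distribution features.
    Each entry is a dict with 'dct' and 'dist' feature vectors."""
    def d(a, b):
        return float(np.sqrt(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2)))
    # Step 1: keep the `preselect` entries with the most similar texture
    group = sorted(database, key=lambda e: d(e['dct'], query['dct']))[:preselect]
    # Step 2: rank the group by distance/angle distributions, keep 3 best
    return sorted(group, key=lambda e: d(e['dist'], query['dist']))[:3]
```

The coarse DCT filter keeps the expensive distribution comparison limited to a small candidate group.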
The experiments were performed on a Core 2 Duo machine with 2 GB of RAM, running 64-bit Windows 7.
We tested a set of about 120 free coin pictures taken from the web. The images were thus acquired by various people using different capture methods (resulting in varying illumination conditions and unpredictable rotation angles), with different sizes and DPI (dots-per-inch) values.
The process of determining the characteristics of a coin is fast, with an average duration of 13.8 milliseconds. Nevertheless, because of the heterogeneity of the input material, the time ranges from as low as 9 milliseconds to as high as 35 milliseconds.
The characteristics of these reference coins were stored in a database, to be matched against a completely different set of test coins. Due to the relatively modest coin collection available, the bulk of the processing went into feature extraction of the target coin rather than matching against reference coins. For a real coin catalogue, most of the effort is expected to go into comparing the coin against every entry stored in the database, which could easily be parallelized.
The experiments on modern coins were very promising, with good accuracy. Wrong classifications occurred only because of poor image quality (such as a very dark picture).
As expected, tests performed on medieval or older coins were not as convincing as those on modern ones, but one has to take into account the problems associated with pre-modern coins: they are not the result of an industrial process, the variety among stamps of medieval coins is inferior to that among stamps of modern coins, and they are often strongly eroded by frequent use and the passing of time.
This paper presented a method for the automatic classification of coin pictures using two effective characteristic types, a method which obtained very good results on modern coins.
Fig. 11. Coins that are sometimes misclassified, mostly due to their shape inconsistencies and surface defects.
The classification of old coins proved not to be as effective as for modern ones. Furthermore, our work presented an approach for dealing with photo capture weaknesses (e.g., illumination problems): the differing lighting conditions and the shiny regions due to specular reflection are corrected to a certain degree, so that the final classification contains as few errors as possible. The current paper is an extension of our earlier work.
Fig. 11 presents a set of coins with classification problems. It can be observed that most of the problems are caused by the advanced state of deterioration of the reference coins, such that no post-processing algorithm is sufficient to restore the individual coin fingerprints to a reasonable level.
Fig. 12 presents a set of coins that are usually classified correctly. The coin fingerprinting mechanism works well for this set, remaining stable when artificial rotations and post-processing lighting changes are applied.
Fig. 12. Coins that usually tend to be correctly classified when artificial rotations and lighting effects are applied.
The results may be considered satisfactory, given the obvious deterioration of the misclassified coins and bearing in mind that some visibly deteriorated coins are still usually classified correctly. However, the database is rather small and may not be considered representative for large-batch classification. Many more coin classes would obviously result in a smaller average distance between the fingerprints of different coins, making the process more sensitive to errors.
On the other hand, when dealing with a small number of classes, even in the case of large input batches, the presented approach should lead to good results for both the classification quality and the processing speed.
- FUTURE WORK
The next steps of the work should concentrate on solving the known problems of the system, although it has to be mentioned that a much larger input database will decrease the actual percentage of correct identification and classification.
Another important aspect of future work concerns improvements to medieval coin classification. A system will be implemented that takes coin images and creates a pyramid of progressively smaller versions of the original image.
Each of the system components has a package (a set of unique up-sampling – down-sampling filters) that is used to recursively construct a pyramid.
The operations involved in generating a new level are:
- down-sampling the image of the current level, obtaining an image of half its width and half its height
- up-sampling the same image, obtaining an approximation of the original image
- obtaining a residual image as the difference between the original image and the up-sampled one
- using the residual image in fingerprinting and classification next to some of the current fingerprints
Fig. 13 shows an example of the resulting pyramid. When a cycle ends, the down-sampled image becomes a new layer of the pyramid. The processing continues, creating more levels until a small enough image is generated. The limit is a linear image, with either a single row or a single column of pixels.
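The pyramid construction can be sketched with block-average down-sampling and nearest-neighbour up-sampling; the paper leaves the exact filters open, so these choices are illustrative:

```python
import numpy as np

def downsample(img):
    """Halve each dimension by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour up-sampling back to `shape` (an approximation)."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    pad_h, pad_w = shape[0] - up.shape[0], shape[1] - up.shape[1]
    if pad_h > 0 or pad_w > 0:
        up = np.pad(up, ((0, max(pad_h, 0)), (0, max(pad_w, 0))), mode='edge')
    return up[:shape[0], :shape[1]]

def build_pyramid(img):
    """Levels of decreasing size plus the residual image at each level."""
    levels, residuals = [img], []
    current = img.astype(float)
    while min(current.shape) > 1:
        smaller = downsample(current)
        approx = upsample(smaller, current.shape)  # approximation of this level
        residuals.append(current - approx)         # residual kept for fingerprinting
        levels.append(smaller)
        current = smaller
    return levels, residuals
```

By construction, each level equals the up-sampled next level plus its residual, so no information is lost between levels.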
Pyramidal coding may offer better signatures and faster classification: some candidates could be rejected at the lower pyramid levels, where signatures are cheaper to compute and to compare against an entire dataset.
Another advantage is that the lower levels are not very sensitive to noise, illumination, or degradation of the coin.
Fig. 13. Constructing a pyramid of different image resolutions
Fig. 14. Diffuse image (left) vs Depth map (right – taken from ) – the edge detection is more uniform for the depth map
The method that promises the most stable results considers the actual three-dimensional shape of the coins. Experiments have been carried out even on historical coins, using a scanning device relying on stereo vision and structured light. The results are very good, the scanner being able to pick up fine shapes on the surface of small coins. To keep the system simple and cheap for the end user, a good option is to use only lighting information to generate the 3D shape. The heights are converted to a height map, which is used together with the texture for classification.
One simple-to-implement method is to use multiple illumination angles and combine the information, using the specular highlights, to recover depth. This means more work for the user: if precision is needed, he or she will have to provide multiple images; otherwise, a single image is enough to generate partial results.
In Figure 14, the depth map was generated by photographing the coin from four different angles (by rotating the coin while keeping the light source fixed). These images were combined into a normal map and then converted to a parallax bump map. The Canny edge contours follow the coin more accurately, the text is more evident, and the edge density is closer to the coin's real edge density.
The authors want to show their appreciation to Andrei Tigora, Mihai Zaharescu and Alexandru Gradinaru for their great ideas, support and assistance with this paper.
-  Adameck, M., Hossfeld, M., and Eich, M. (2003). Three color selective stereo gradient method for fast topography recognition of metallic surfaces. Machine Vision Applications in Industrial Inspection, 11, Pages 128-139, DOI: 10.1117/12.473968.
-  Artusi, A., Banterle, F., and Chetverikov, D. (2011). A Survey of Specularity Removal Methods. Computer Graphics Forum, Volume 30, Issue 8, Pages 2208–2230.
-  Bag, S., Bhowmick, P., Behera, P., and Harit, G. (2011). Robust binarization of degraded documents using adaptive-cum-interpolative thresholding in a multi-scale framework, In Proceedings of International Conference on Image Information, Pages 1-6.
-  Bourgin, D. (2009). Color spaces FAQ. Available from: http://www.ilkeratalay.com/colorspacesfaq.php. [Accessed 1st November 2013].
-  Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 8, Issue 6, Pages 679-714.
-  Chai, H.Y., Wee L.K., and Supriyanto, E. (2011). Edge detection in ultrasound images using speckle reducing anisotropic diffusion in canny edge detector framework. In Proc. 15th WSEAS international conference on Systems, Pages 226-231.
-  Davidsson, P., (1996). Coin classification using a novel technique for learning characteristic decision trees by controlling the degree of generalization. Ninth International Conference on Industrial & Engineering Applications of Artificial Intelligence & Expert Systems (IEA/ AIE-96), Pages 403–412.
-  Deriche, R. (1987). Using Canny’s criteria to derive an optimal edge detector recursively implemented. International Journal of Computer Vision, Volume 1, Issue 2, Pages 167–187.
-  Fukumi, M., Omatu, S., and Kosaka, T. (1992). Rotation invariant neural pattern recognition system with application to coin recognition. IEEE Transactions on Neural Networks, Volume 3, Issue 2, Pages 272-279.
-  Green, B. (2002). Canny edge detection tutorial. Available from: http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html. [Accessed 23rd November 2013].
-  Jianzhuang, L., Wenqing, L., and Yupeng, T. (1991). Automatic thresholding of gray-level pictures using two-dimension Otsu method, In Proceedings of International Conference on Circuits and Systems, Pages 325-327.
-  Lambert J.H. (1760) Photometria Sive de Mensura de Gratibus Luminis, Colorum et Umbrae. Augsberg, Germany: Eberhard Klett.
-  Maaten, L.J.P. and Boon, P.J. (2006). COIN-O-MATIC: a fast and reliable system for coin classification. In Proceedings of the MUSCLE Coin Workshop, Berlin, Germany, Pages 7-17.
-  Maaten, L.J.P. and Postma, E.O. (2006). Towards automatic coin classification, In Proceedings of EVA-Vienna, Pages 19-26.
-  Menon, V., Babbar, U., and Dasguta U. (2009). Web-based image similarity search, unpublished.
-  Miyazaki D., Tan R.T., Hara K. and Ikeuchi K. (2003) Polarization-Based Inverse Rendering from a Single View. In Proc. IEEE International Conference Computer Vision (ICCV ’03).
-  Netz A. and Osadchy M., (2013), Recognition Using Specular Highlights. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, Issue 3, Pages 639-652, March 2013, doi: 10.1109/ TPAMI.2012.127.
-  Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, Volume 9, Issue 1, Pages 62-66.
-  Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, Volume 1, Issue 1, Pages 81–106.
-  Rangarajan, S. (2005). Algorithms for edge detection. Available from: http://www.ee.sunysb.edu/~cvl/ese558/s2005/Reports/Srikanth%20Rangarajan/submission.doc. [Accessed 3rd November 2013].
-  Reinhold, H.M., Nölle, M., Rubik, M., Hödlmoser, M., Kampel, M. and Zambanini, S. (2012). Chapter 7 – Automatic Coin Classification and Identification, In Advances in Object Recognition Systems – Publisher: InTech, May 09, 2012, Edited by Ioannis Kypraios.
-  Zambanini, S. and Kampel, M. (2011). Automatic coin classification by image matching. VAST’11 Proceedings of the 12th International Conference on Virtual Reality, Archaeology and Cultural Heritage, Switzerland, Pages 65-72.
-  Sung, T.-Y. (2007). Memory-efficient and high-performance 2-D DCT and IDCT processors based on CORDIC rotation, In Proceedings of 7th WSEAS International Conference on Multimedia Systems & Signal Processing, Hangzhou, China, April 15-17 2007, Pages 7-12.
-  Tică, S.N., Boiangiu, C.-A., and Tigora, A. (2013). A method for automatic coin classification. In Proceedings 1st WSEAS International Conference on Image Processing and Pattern Recognition, Budapest, Hungary, December 10-12, Pages 188-198.
-  Tică, S. N., Boiangiu, C. A. and Tigora, A. (2014). Automatic Coin Classification, International Journal of Computers, Volume 8, 2014, Pages 82-89.
-  Tică, S. N., Boiangiu, C. A., Bucur, I. (2014), A System for Automatic Coin Identification and Classification, Proceedings of IWoCPS-3, The Third International Workshop On Cyber Physical Systems, Bucharest, Romania, May 29-30, 2014, Pos 11.
-  Tigora, A. (2013). An image binarization algorithm using watershed-based local thresholding. In Proceedings of 1st WSEAS International Conference on Image Processing and Pattern Recognition (IPPR ’13), Budapest, Hungary, Pages 154-160.
-  Torrance, K.E. and Sparrow, E.M. (1967). Theory for Off-Specular Reflection from Roughened Surfaces. Journal of the Optical Society of America, Volume 57, Issue 9, Pages 1105-1114.
-  Thread “Creating normal maps of real coins from 2D scans”: http://blenderartists.org/forum/showthread.php?245418-Creating-normal-maps-of-real-coins-from-2D-scans-cycles-render-of-US-0-25 [Accessed 17th July 2014].
-  Zaharescu, M. and Petrescu, C.I. (2013). Edge detection in document analysis. Journal of Information Systems & Operations Management, Volume 7, Issue1, Pages 156-165.
-  Zambanini, S., Schlapke, M., Kampel, M. and Müller, A. (2009). Historical Coins in 3D: Acquisition and Numismatic Applications. 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST’09, Pages 49-52.
-  Zhang, W. and Bergholm, F. (1997). Multi-scale blur estimation and edge type classification for scene analysis. International Journal of Computer Vision, Volume 24, Issue 3, Pages 219-250.