1. Introduction
As the world becomes more technology-driven and consumers prioritize convenience in their shopping and lifestyles, smartphones play important roles in diverse areas of daily life. The fashion industry, especially fashion e-commerce, often fails to prevent product returns caused by improper fit. While return rates for in-store clothing purchases are about 5–10%, online return rates have soared to 40% [1]. In fact, as fashion e-commerce continues to grow, it is forecasted to face a trillion-dollar returns problem [1]. This industry challenge shows the potential of smartphones to provide a convenient and rapid fit detection method that addresses the industry's sales and return issues. With advanced smartphone-based methods, body measurements can be taken more accurately, and different clothing options can be compared against these measurements. Fashion businesses can thus increase sales and decrease returns, which are currently a huge source of waste [1]. Even though some technology companies have been trying to solve digital fit issues, unresolved pain points remain in the fashion market, as these innovations are still in their early stages.
Various body measurement tools have been developed in recent years, utilizing different techniques to obtain body measurements. Some of the most common methods include full-sized 3D body scanners, mobile or handheld scanners, and wearable scanners [2,3]. However, all of these devices have weaknesses due to their size and their inconvenient measurement procedures. One type of scanner, the mobile smartphone-based scanner, therefore shows promise for mass-market acceptance [4,5].
With current mobile 3D scanning technology, a majority of users, especially fashion consumers, would take advantage of a service that obtains body measurements for proper garment fit selection [4]. Additionally, a scanner able to translate sizing between brands would be one of the most sought-after technology features [4]. This type of scanner suggests a high level of usefulness for diverse categories of retailers, from small boutique-style stores to large national department stores.
Unfortunately, there are issues surrounding the consistency of these relatively new mobile scanning tools [6]. These issues mostly stem from human error in the positioning of the subject and the choice of background in front of which the scan is performed [6]. Most scanners achieve higher accuracy when the subject wears less clothing, because mobile scanners cannot perfectly distinguish between clothing and the human body. This leads to hesitance among users, who fear being seen in minimal clothing or having their photos become public [7]. As a result, users are less likely to wear clothing appropriate for the most accurate scans, increasing the errors in their measurements.
Hence, fashion retailers are reluctant to implement these types of technology for fear that their customers may receive inaccurate measurements and recommendations from the currently available tools, which have potential issues with accuracy and speed. Even with recent technological advances, there is still no reliable measurement software. Thus, there is a need for technology that takes an individual's body measurements in an accurate, rapid, and comprehensive way. If the current technological limitations can be overcome, industry experts anticipate that mobile smartphone scanners will become a useful and cost-effective way to measure consumers' body sizes, benefiting both retailers and consumers [2].
In addition, the convenience and affordable price point of mobile scanners, in comparison with their larger, more expensive counterparts (e.g., 3D full-body scanners), make such technology highly feasible for the fashion industry to adopt [2]. This also suggests that millions of dollars can be saved by reducing the probability of online returns [8]. Accurate body measurements can help consumers select the correct clothing size and fit, in addition to the precise body shape detection assessed previously [9]. Given the current issues with taking body measurements, we propose a novel method to obtain body measurements using a smartphone in an accurate, fast, and portable way. We specifically focused on body measurements for pants, with which clothing customers have the most fit issues [10]. Body measurements of three areas, the waist, lower hip, and thigh, are used in this study.
2. Materials
Our body measurement data were collected following the IRB protocol (IRB2020-482) at Texas Tech University. Under this protocol, 12 subjects (10 male, 2 female) were recruited. The experimenter captured each subject's frontal and lateral images using our developed smartphone application. Afterward, we asked the subjects to measure their preferred waistline, lower hip, and thigh circumferences with a measurement tape to obtain a gold-standard reference against which the estimates of our proposed method could be compared. All subjects' data were deidentified following the IRB protocol to protect their privacy. Because our method uses only the subject's silhouette, neither body shape nor any other parameter in our assessment depended on the subject's gender; therefore, no distinction between subjects was made based on gender. No identifiable parameters were obtained.
In our proposed method, body sizes are obtained from 2D images captured by a smartphone camera. When performing anthropometric measurements with a measurement tape, intra-observer (when two self-measurements are compared) and inter-observer (when a self-measurement is compared to a technician's measurement) measurement errors range from 2 mm to 20 mm [11,12]. The error in the waist measurement was obtained by following Equation (1) below:
$$E_{\text{waist}} = \frac{E_{\text{height}}}{H} \times C_{\text{waist}}, \tag{1}$$

where $E_{\text{height}}$ is the error in the measured height $H$, and $C_{\text{waist}}$ is the waist circumference.
In the proposed method, a 2 cm error in the height measurement for a person of standard height (5 ft 8 in, or 172.72 cm) introduces an error of 1 cm (0.3937 in) in a 34 in (86.36 cm) waist measurement, according to Equation (1). This error is small compared with the variability of technician tape measurements, which show an intra-observer correlation of 0.96 and an inter-observer correlation of 0.93 [13]; hence, the proposed calculation can yield results at least as reliable as manual measurement despite calculation, estimation, and technical errors.
The original measurements were taken with a measurement tape while the subjects were clothed; that is, the tape was placed over the clothing rather than on bare skin. When the body size was measured with the tape, error was minimized by taking three readings of each measurement and averaging them. During the measurement process, the experimenter guided the subjects to acquire measurements at specific locations by following the body's landmarks; for example, the iliac crest, the narrowest abdominal part, and the preferred waistline are the reference locations for the waist circumference measurement. Each subject was asked to stand in a standardized posture, and we used two silhouette images (front and side) to obtain the measurements with high accuracy [14].
3. Methods
3.1. Reference-Free Data Acquisition
Smartphone-based body measurement methods have been developed previously [15]. However, these methods did not provide high accuracy [14,16], with errors ranging from 9 mm to 42 mm and precisions of 0.7801 ± 0.0689 and recalls of 0.8952 ± 0.0995, respectively. Moreover, given the diversity of smartphone brands on the market, each with its own camera specifications and processing capabilities, and the complexity of 3D reconstruction, these conventional methods were not accurate [17].
Smartphone-based body measurement research has also been conducted using a subject's personal credit card as a reference. Specifically, Spector et al. [18] showed an approach for measuring the dimensions of a target object using an object of known size as a reference. However, because such reference objects are small, not invariant in size, and not always available, this approach is inconvenient; the user must have the specific reference object at hand whenever they want to measure their size. Our proposed method instead uses the subject's height as the known parameter and, with the help of image processing and 3D reconstruction methods, delivers an accurate result that is invariant to camera specifications and processing power, resulting in a better estimation of body size measurements. The flowchart of our developed smartphone application is given in Figure 1.
3.2. Measurement Process
The body measurement process requires a reference to convert pixel counts into physical units. Due to the unavailability of a universal reference object, we use the individual's height as the reference. Therefore, the user of the application is prompted to enter his or her height before capturing the image. The pixels of the acquired images are square, i.e., the pixel size is the same in both directions [19,20]. Hence, the pixel-to-unit ratio calculated along one axis can also be used for the other axis.
The image capture process is shown in Figure 2. A second person is needed to capture the image accurately, following the instructions stated in the application. The smartphone camera uses internal geometric calibration to remove lens distortion [21]. Smartphones such as the Samsung Galaxy Note 10+, Pixel 3a, and Xiaomi Redmi Note 5 Pro have an internal lens correction algorithm [22] in which the distortion curves are represented as odd-degree polynomials and corrected to a linear trend [23]. In this paper, we adopted the Pixel 3a, which uses this internal lens correction algorithm [24]. When capturing the image, it is mandatory to hold the phone level, both horizontally and vertically, and to fit the subject's whole body inside the camera view. In this way, the height of the captured image serves as the reference, and the pixel-to-unit ratio is calculated by following Equation (2):
$$r = \frac{H}{l}, \tag{2}$$

where $r$ is the pixel-to-unit ratio, $H$ is the height (in inches), and $l$ is the length (height) of the captured image in pixels.

3.3. Preprocessing the Captured Image
3.3.1. Image Grayscale
The captured image needs to be preprocessed before it can be used for measurement calculation. The image processing is performed within the smartphone application with low computational complexity; therefore, the processing is fast, simple, and runs in real time.
The image processing technique incorporated in the smartphone application binarizes the image: the captured image is converted to a black-and-white image using a threshold. The smartphone captures the image as an RGB image, where R, G, and B represent the red, green, and blue channels, respectively, as shown in Figure 3. First, the RGB image is converted into a grayscale image, with the three color channels combined into a single value that determines the grayness of each pixel. Specifically, we used luminance-based grayscaling. Luminance matches human brightness perception and uses a weighted combination of the RGB channels. The luminance method is the one used in standard image processing software, implemented by MATLAB's "rgb2gray" function, and widely used in computer vision applications [25]. Equation (3) gives the grayscale value of a specific pixel of the RGB image:
Gr = 0.299 × R + 0.587 × G + 0.114 × B, (3)

where Gr is the grayscale value and R, G, and B are the red, green, and blue channel values of the pixel, respectively.

3.3.2. Otsu's Thresholding Method
For binarizing the image, automatic thresholding is used. For this purpose, Otsu's thresholding method is utilized due to its simplicity and suitability for smartphones [26]. Otsu's method relies on the image histogram to calculate the optimal threshold value [27]. It statistically separates the pixels into two classes to binarize the image, with the threshold value obtained using moments of the first two orders (i.e., the mean and standard deviation). The normalized histogram $h_i$ is expressed as:
$$h_i = \frac{n_i}{N}, \qquad i = 0, 1, \ldots, L-1, \tag{4}$$
where $n_i$ is the number of pixels at gray level $i$, $N$ is the total number of pixels in the image, and $L$ is the number of gray levels in the image. The mean ($\mu$) and standard deviation ($\sigma$) are shown in Equations (5) and (6), respectively:

$$\mu = \sum_{i=0}^{L-1} i\,h_i, \tag{5}$$
$$\sigma = \sqrt{\sum_{i=0}^{L-1} (i - \mu)^2\,h_i}. \tag{6}$$
The value of T that maximizes the inter-class variance is considered to be the optimal threshold. Thus, for binarization of a grayscale image using Otsu's method, the threshold value T is determined as follows:

$$T = \underset{0 \le t < L}{\arg\max}\; \sigma_B^2(t), \qquad \sigma_B^2(t) = \frac{\left[\mu_T\,\omega(t) - \mu(t)\right]^2}{\omega(t)\left[1 - \omega(t)\right]}, \tag{7}$$

where $\mu_T$ is the mean level of the image (equal to $\mu$ in Equation (5)), $\omega(t) = \sum_{i=0}^{t} h_i$ and $\mu(t) = \sum_{i=0}^{t} i\,h_i$ are the cumulative probability and cumulative mean up to gray level $t$, and $\sigma_B^2$ is the inter-class variance.

The optimal threshold is applied to the grayscale value of each pixel to obtain the binary image: if a pixel's grayscale value is greater than the threshold, the pixel is converted to white; otherwise, it is converted to black. Processing the whole image in this way converts the grayscale image to a black-and-white image, making each pixel black or white. This is how image binarization is performed. Figure 4 shows the binarization process.
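As a concrete illustration, the following Python sketch implements the grayscale conversion of Equation (3) and the Otsu search of Equations (4)–(7), assuming an 8-bit RGB image stored in a NumPy array; the function names are illustrative and not taken from the SmartFit Measurement source code.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Luminance-weighted grayscale conversion, Equation (3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def otsu_threshold(gray: np.ndarray) -> int:
    """Search all gray levels for the threshold maximizing the
    inter-class variance of Equation (7)."""
    counts = np.bincount(gray.ravel(), minlength=256).astype(float)
    h = counts / counts.sum()            # normalized histogram, Equation (4)
    mu_T = float((np.arange(256) * h).sum())   # mean level of the whole image
    best_t, best_var = 0, -1.0
    omega, mu = 0.0, 0.0                 # cumulative probability and mean
    for t in range(256):
        omega += h[t]
        mu += t * h[t]
        denom = omega * (1.0 - omega)
        if denom <= 1e-12:               # one class is empty; skip
            continue
        sigma_b = (mu_T * omega - mu) ** 2 / denom   # Equation (7)
        if sigma_b > best_var:
            best_var, best_t = sigma_b, t
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    """Pixels above the Otsu threshold become white (255), others black."""
    return np.where(gray > otsu_threshold(gray), 255, 0).astype(np.uint8)
```

In practice, OpenCV's cv2.threshold with the THRESH_OTSU flag performs the same histogram-based search in optimized native code.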
3.3.3. Background Removal
In our proposed algorithm, the background of the acquired image is removed, and only the subject is extracted, by taking another image without the subject. Both images are converted to grayscale, and the subject is extracted by subtracting the background image from the image containing the subject. The proposed algorithm can measure a subject's body size even when the subject's clothes are lighter than the background. Specifically, the subjects were not asked to wear any specific-colored clothing, and the subject area was extracted using our proposed background removal method. Equation (8) is used in the background removal algorithm:
$$I_{\text{sil}} = \left| G_{\text{sub}} - G_{\text{bg}} \right|, \tag{8}$$
where $G_{\text{sub}}$ is the grayscale image with the subject, $G_{\text{bg}}$ is the grayscale image of the background, and $I_{\text{sil}}$ is the silhouette image of the subject. The extracted image is then processed using our proposed image processing methods to extract the region covered by the subject. Hence, the process is invariant to the color of the clothing that the subject is wearing. Figure 5 shows the proposed background removal process.

For the background subtraction to work, the two images must first be registered onto each other. Because image capture with the smartphone will mostly be done by another person rather than from a stable tripod, the captured images will almost certainly differ by a geometric transformation (e.g., translation, rotation, or an affine transformation). For image registration, we used speeded-up robust features (SURF) [28]. SURF detects feature points (blobs, edges, or corners) in the two images and matches them onto each other. The image registration process includes feature detection, matching, transformation, and image warping and blending [29]. The detected features were matched to determine the overlapping regions between the two images, and falsely matched pairs were rejected using the RANSAC algorithm [30,31]. Finally, the background image was warped into a mosaic and blended with the subject image using the multi-band blending algorithm [32]. In our proposed method, we kept the subject's frontal image fixed, and the subject's lateral image, as well as the background image, was transformed to be registered onto the fixed image. The image registration process is shown in Figure 6.
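The sketch below outlines this registration-plus-subtraction pipeline under two stated substitutions: ORB (bundled with stock OpenCV) stands in for SURF, which requires the opencv-contrib xfeatures2d module, and the final multi-band blending step is omitted, since only the silhouette is needed for measurement.

```python
import cv2
import numpy as np

def extract_silhouette(subject_gray: np.ndarray,
                       background_gray: np.ndarray) -> np.ndarray:
    """Register the background onto the (fixed) subject image, then
    subtract, following Equation (8)."""
    # Feature detection and description (ORB in place of SURF).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_bg, des_bg = orb.detectAndCompute(background_gray, None)
    kp_sub, des_sub = orb.detectAndCompute(subject_gray, None)

    # Match descriptors; keep the strongest pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_bg, des_sub),
                     key=lambda m: m.distance)[:200]
    src = np.float32([kp_bg[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_sub[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects falsely matched pairs, as in the paper.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the background into the subject image's frame and subtract.
    h, w = subject_gray.shape
    warped_bg = cv2.warpPerspective(background_gray, H, (w, h))
    diff = cv2.absdiff(subject_gray, warped_bg)          # Equation (8)
    _, silhouette = cv2.threshold(diff, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return silhouette
```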
3.4. Obtaining Measurements
To obtain the body measurements of the individual, a unique method based on human body proportions was applied. There is no single way of determining the waistline's location, and it depends on personal preference, although it is generally considered to be around the level of the umbilicus [33,34]. In our proposed method, we followed the ISO guideline [35] to determine the waistline's location, as there is no standard landmark for the waistline and it depends on the subject's anatomy and perception [36]. According to human body ratios, the waist of an individual is at approximately three eighths of their height, and visually aesthetic clothing should have an upper-body-to-lower-body ratio of 0.372:0.618 [37]. Similarly, the lower hip region is at half of the height, and the thigh is at five eighths of the height. Different studies have incorporated a similar approach for human detection in images and videos [38], as shown in Figure 7.
There are exact sites for locating the waistline [39] and guidelines for locating the exact waistline position (e.g., ISO [35]) based on body landmarks. However, users with different body shapes or with obesity may have trouble measuring their waistlines. Hence, in this proposed method of body size measurement, we introduced a unique way of locating body landmarks based on body proportion, which places the waistline at three eighths of the subject's height. Nonetheless, the waistline location relies on personal preference, and different people of the same height may prefer different waistlines [40]. Studies show a strong correlation between the preferred waistline and the ISO-defined waistline [36].
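A minimal sketch of this proportion-based landmarking, assuming a binarized silhouette (white subject on a black background) whose first and last nonzero rows correspond to the top of the head and the feet:

```python
import numpy as np

def landmark_rows(silhouette: np.ndarray) -> dict:
    """Locate the waist, lower hip, and thigh rows by body proportion."""
    rows = np.flatnonzero(silhouette.any(axis=1))   # rows containing the body
    top, bottom = int(rows[0]), int(rows[-1])
    h = bottom - top                                # subject height in pixels
    return {
        "waist":     top + round(3 * h / 8),        # three eighths of height
        "lower_hip": top + round(h / 2),            # half of height
        "thigh":     top + round(5 * h / 8),        # five eighths of height
    }
```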
In our proposed method, we incorporated three parameters to predict users’ preferred waistline heights: (1) height (x-axis values in Figure 8b,c), (2) the waistline based on body proportion (three eighths of the height), and (3) the height of the narrowest abdominal part. The individual preference for the height of the waistline was obtained from nine subjects.
The subjects were asked to locate their preferred waistline and measure its height from the ground (red marks in Figure 8b) using a measurement tape. (1) The subject's height was obtained as user input; using the developed SmartFit Measurement application, (2) the waistline location from body proportions and (3) the narrowest abdomen location were obtained by calculation and image processing, respectively. A regression curve was obtained using a neural network to predict the user's preferred waistline height from these three input parameters. A two-layer feedforward neural network with 50 hidden units was trained with a learning rate of 0.00005 and a maximum of 50,000 epochs using a mean squared error loss function. The regression resulted in a mean error of 0.8570 inches and a standard deviation of 0.7914 inches, with an R² value of 0.75021. Using image processing techniques, the waist circumference was then obtained at the predicted preferred waistline. Figure 8a shows the neural network model, and Figure 8b shows the regression curve used to obtain the user's preferred waistline height. A second-degree polynomial regression curve was also fitted for comparison, resulting in an R² value of 0.8947, as shown in Figure 8c. The regression curve obtained by the neural network showed a bias below the original heights of the preferred waistlines because of the trend of two of the input parameters (the proportion-based waistline location and the narrowest abdomen location).
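The following sketch reproduces this regression setup with scikit-learn's MLPRegressor rather than the authors' own implementation: hidden_layer_sizes=(50,) gives one 50-unit hidden layer plus a linear output (a two-layer network), the default Adam solver minimizes squared loss, and the learning rate and epoch limit match the reported values. The training rows shown are illustrative placeholders, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One row per subject: [height, 3/8 * height, narrowest-abdomen height],
# all in inches; y holds the tape-measured preferred waistline heights.
X = np.array([[68.0, 25.5, 41.0],
              [70.0, 26.3, 42.5],
              [64.0, 24.0, 39.0]])
y = np.array([40.2, 42.0, 38.5])

model = MLPRegressor(hidden_layer_sizes=(50,),   # one hidden layer, 50 units
                     learning_rate_init=5e-5,    # learning rate 0.00005
                     max_iter=50_000,            # maximum epoch number
                     random_state=0)
model.fit(X, y)
predicted_waistline = model.predict([[69.0, 25.9, 41.8]])
```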
For the lower hip and thigh circumference measurements, we relied on the definition of the hip from ISO [35] when taking the original measurement. According to the definition, the lower hip circumference was measured around the buttocks area at the level of the greatest lateral projections horizontally while the subject is in a standing position. The highest thigh position was measured for the thigh circumference. The SmartFit Measurement application uses image processing techniques to obtain the lower hip circumference (by taking the maximum width of the silhouette image of the subject in the lower body region) and the body proportion algorithm to obtain the thigh circumference (at five eighths of the body height from the top of the image).
The measurements were then obtained from the binarized image. The obtained region was converted to a silhouette image, and the pixel values were summed along the row at the subject's preferred waistline. Each white pixel has a value of 255, and each black pixel has a value of 0; therefore, dividing the row sum by 255 gives the number of pixels covering the human body in that row. Repeating this over the relevant rows of the image yields the body widths in pixels. Figure 9 shows the pixel representation of the binary image and the process of body measurement.
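A sketch of this row-counting step, combined with the pixel-to-unit ratio of Equation (2); the lower-hip helper reflects the maximum-width rule described above, and the helper names are illustrative:

```python
import numpy as np

def row_width_inches(silhouette: np.ndarray, row: int,
                     height_in: float) -> float:
    """Width of the subject at `row`, in inches.

    White subject pixels have value 255; dividing the row sum by 255
    counts them, and the ratio of Equation (2) converts to inches
    (the subject is assumed to span the full image height).
    """
    ratio = height_in / silhouette.shape[0]       # inches per pixel, Eq. (2)
    n_pixels = float(silhouette[row].sum()) / 255.0
    return n_pixels * ratio

def lower_hip_row(silhouette: np.ndarray) -> int:
    """Row of maximum body width in the lower half of the image,
    used for the lower hip."""
    half = silhouette.shape[0] // 2
    widths = silhouette[half:].sum(axis=1)
    return half + int(np.argmax(widths))
```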
3.5. 3D Reconstruction of the Measurement Areas
In our proposed body size measurement technique, we focused on three measurements, namely the waist, lower hip, and thigh circumferences. From the frontal and lateral measurements obtained, the waist, lower hip, and thigh cross-sections were calculated, modeled as ellipses [15]. The axis lengths were obtained, and the approximate circumference values were calculated using Equation (9) below. Figure 10 shows the cross-sectional areas and measurements for the waist, lower hip, and thigh regions of a subject:
$$C \approx \frac{\pi}{2}\left[3(a+b) - \sqrt{(3a+b)(a+3b)}\right], \tag{9}$$

where $a$ and $b$ are the long- and short-axis lengths, respectively, and $C$ is the circumference of the region, as shown in Figure 10.

The measurements were then incorporated with an existing 3D mannequin to emulate the 3D reconstruction of the user's body, as shown in Figure 11.
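A sketch of this step, assuming Equation (9) is Ramanujan's ellipse perimeter approximation written in terms of the full long- and short-axis lengths (the body widths read from the frontal and lateral silhouettes):

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation, with a and b the full long- and
    short-axis lengths (Equation (9))."""
    return 0.5 * math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Example: 13.0 in frontal width and 8.0 in lateral depth at the waist row
# give a circumference of roughly 33.5 in.
waist_in = ellipse_circumference(13.0, 8.0)
```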
3.6. Developed Smartphone Application
A smartphone application, SmartFit Measurement, was developed on the Android platform [41]. The application runs on Android 7.0 (Nougat) [42] and later versions, with a minimum of 2 GB of RAM and a 1.3 GHz quad-core processor. Specifically, a Google Pixel 3a smartphone was used to obtain the measurements with the developed application, and the application was also tested on other Android phones (e.g., the Xiaomi Redmi Note 5 Pro and Samsung Galaxy Note 10+) with the same measurement accuracy. The application performs each of the steps described above internally, incorporating the image processing and machine learning steps. Figure 12 shows each step of using the application and the calculated measurements of an individual.
4. Results
A total of 12 volunteer subjects were included in the study (as described in Section 2). Each subject used the application to obtain their body measurements. In addition, original measurements were obtained by a manual process (i.e., tape measurement at the preferred waistline, lower hip, and thigh regions). For each subject, the three measurements are the waist, lower hip, and thigh circumferences, respectively. The results are listed in Table 1.
Data acquisition was performed considering the different kinds of clothing that the subjects wore. For example, subjects wearing loose clothing, faded jeans, camouflage clothing, or light-colored clothing were also included in this study. Figure 13 shows the bar chart of the original and obtained measurements.
Our proposed technique achieved an average accuracy of 95.59%, with a standard deviation of error of 1.898 inches, on the obtained data (Table 1), computed using Equations (10) and (11) below, respectively:
$$\text{Accuracy} = \left(1 - \frac{1}{n}\sum_{i=1}^{n}\frac{\left|m_i - \hat{m}_i\right|}{m_i}\right) \times 100\%, \tag{10}$$

$$\sigma_e = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(e_i - \bar{e}\right)^2}, \tag{11}$$

where $m_i$ and $\hat{m}_i$ are the $i$-th original and obtained measurements, $e_i = m_i - \hat{m}_i$ is the measurement error, $\bar{e}$ is the mean error, and $n$ is the total number of measurements.
A paired t-test provided no evidence of a significant difference between the original and obtained measurements. The 95% confidence interval for the error was −0.72 to 0.34 inches, with a margin of error of 0.5346 inches. Discrepancies in the measurements may arise from clothing preference, as some subjects felt more comfortable in loose clothing. Three subjects wore loose clothing; their measurement accuracy was 93.49%, whereas subjects wearing fitted clothing had an accuracy of 96.17%. Hence, the accuracy was about 2.7 percentage points lower for loose and faded-color clothing than for fitted clothing. Specifically, the accuracy for the 10 male subjects was 94.90%, while that for the 2 female subjects was 96.73%.
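As a sketch of how the reported statistics follow from the paired data, assuming Equations (10) and (11) take the forms given above (the arrays below are an excerpt of Table 1, not the full dataset):

```python
import numpy as np
from scipy import stats

original = np.array([33.0, 37.0, 20.0, 32.0, 39.0, 20.0])  # Table 1 excerpt
obtained = np.array([33.0, 38.0, 24.0, 32.5, 39.7, 20.9])

errors = original - obtained
accuracy = (1 - np.mean(np.abs(errors) / original)) * 100   # Equation (10)
std_error = np.std(errors, ddof=1)                          # Equation (11)

# Paired t-test and 95% confidence interval for the mean error.
t_stat, p_value = stats.ttest_rel(original, obtained)
n = len(errors)
margin = stats.t.ppf(0.975, n - 1) * std_error / np.sqrt(n)
ci = (errors.mean() - margin, errors.mean() + margin)
```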
5. Discussion
The proposed smartphone application in this study uses a unique method to obtain the measurements, which is more convenient than the existing 3D reconstruction-based and time-of-flight (ToF) camera methods, as shown in Table 2.
In this paper, we have explored the usability of smartphones in online shopping for fashion garment products, especially pants, as the impact of smartphone-based mobile shopping has yet to be fully explored [45,46]. Our proposed application is expected to run on other mobile devices (e.g., tablets) or PCs in the future. As future work, the effect of posture variability on the accuracy of the extracted measurements will be considered. The developed smartphone application, SmartFit Measurement, can be linked to any manufacturer's product database, and with a garment suggestion algorithm, clothing recommendations can be provided to consumers; this will be evaluated in our future work.
6. Conclusions
In this paper, we have assessed the usability of an easy, convenient, and accurate smartphone-based method for obtaining an individual's body measurements. This application has the potential to solve the garment fitting issues of currently available methods and to provide a better alternative in the market through its unique measurement algorithm. Furthermore, it could be an essential way to reduce product returns caused by incorrect fit, which has been a major pain point for the fashion industry. With an accuracy of 95.59%, this solution is expected to replace existing methods, providing a convenient garment shopping experience for consumers and increased revenue for the online apparel e-commerce industry. Our future work is to measure additional areas of the body and to connect the body measurement data collected with our method to garment data for accurate garment fit detection.
Author Contributions
K.H.F. designed the analysis, developed the software, wrote the original and revised manuscript, and conducted data analysis and details of the work; H.-J.C. designed the research experiment, verified data, and conducted statistical analysis; F.B. verified the data and analysis; and J.-W.C. conceptualized and designed the research experiment, wrote the original and revised drafts, designed, redesigned, and verified the image data analysis, and guided the direction of the work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted following the IRB protocol (IRB2020-482) at Texas Tech University.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figures and Tables
Figure 2. Image capture process with the smartphone from a (a) frontal and (b) lateral view. The image is captured with the subject's head and toes touching the top and bottom of the camera view, respectively.
Figure 3. RGB representation of the smartphone image. (a) Red, green, and blue planes of the RGB image. (b) Bit representation of the RGB image.
Figure 4. Otsu’s thresholding for image binarization. (a) Grayscale image, (b) image histogram, and (c) binarized image using an optimal threshold of T = 3.
Figure 5. Background removal process. (a) Background image, (b) image with the subject, and (c) the subject’s image with the background removed.
Figure 6. Image registration process. (a) Image registration using SURF feature points. (b) Background registered onto the subject's image.
Figure 7. Human body ratio (the waist, lower hip, and thigh areas are indicated).
Figure 8. (a) Neural network model for estimating the heights of the waistlines of the users’ respective preferences and a regression curve obtained for prediction of the waistline using (b) a neural network and (c) polynomial fitting. Here, the red circles are the waistlines preferred by the individual subjects, and the blue and green curves are the regression curves, which were used to obtain the predicted waistline of the user’s own preference.
Figure 9. The binarized image of a subject. The lower hip measurement is calculated by the area in black.
Figure 10. Approximated area of the measurements. The waist, lower hip and thigh areas are considered to be ellipses.
Figure 11. 3D reconstruction from two 2D images of the user. (a) Frontal view. (b) Lateral view.
Table 1. Waist, lower hip, and thigh measurements (in inches) for 12 subjects.

| Subject | Measurement | Original (in) | Obtained (in) | Error (%) |
|---|---|---|---|---|
| 1 | Waist | 33.0 | 33.0 | 0.0 |
| | Lower hip | 37.0 | 38.0 | 2.7 |
| | Thigh | 20.0 | 24.0 | 20.0 |
| 2 | Waist | 32.0 | 32.5 | 1.5 |
| | Lower hip | 39.0 | 39.7 | 1.9 |
| | Thigh | 20.0 | 20.9 | 4.9 |
| 3 | Waist | 35.0 | 35.0 | 0.0 |
| | Lower hip | 38.0 | 36.7 | 3.4 |
| | Thigh | 20.0 | 18.6 | 6.7 |
| 4 | Waist | 37.0 | 36.7 | 0.8 |
| | Lower hip | 38.0 | 33.5 | 11.8 |
| | Thigh | 20.0 | 22.6 | 13.3 |
| 5 | Waist | 38.5 | 39.4 | 2.3 |
| | Lower hip | 40.0 | 41.7 | 4.2 |
| | Thigh | 23.0 | 22.7 | 1.3 |
| 6 | Waist | 37.0 | 37.3 | 0.8 |
| | Lower hip | 42.0 | 38.1 | 9.2 |
| | Thigh | 22.0 | 20.3 | 7.4 |
| 7 | Waist | 30.0 | 30.6 | 2.0 |
| | Lower hip | 38.5 | 38.1 | 0.8 |
| | Thigh | 21.0 | 22.8 | 8.5 |
| 8 | Waist | 45.0 | 42.0 | 6.6 |
| | Lower hip | 46.0 | 43.2 | 6.0 |
| | Thigh | 29.0 | 28.3 | 2.1 |
| 9 | Waist | 36.5 | 38.1 | 4.5 |
| | Lower hip | 39.5 | 38.9 | 1.4 |
| | Thigh | 21.5 | 20.7 | 3.7 |
| 10 | Waist | 38.0 | 39.7 | 4.4 |
| | Lower hip | 39.5 | 39.0 | 1.2 |
| | Thigh | 20.5 | 21.8 | 6.3 |
| 11 | Waist | 36.0 | 39.7 | 10.2 |
| | Lower hip | 38.0 | 39.0 | 2.6 |
| | Thigh | 20.0 | 21.8 | 9.0 |
| 12 | Waist | 34.0 | 36.9 | 8.5 |
| | Lower hip | 38.0 | 37.6 | 1.0 |
| | Thigh | 21.0 | 21.3 | 1.5 |
Table 2. Comparison of the proposed method with existing methods for body size measurements.

| | Xiaohe et al. [43] | Apeagyei et al. [44] | Proposed Method |
|---|---|---|---|
| Feature | 3D reconstruction-based focal point | 3D reconstruction | Measurement from 2D image |
| Complexity | Complex | Complex | Simple |
| Implementation | Virtual try-on machine | Camera array | Smartphone |
| Accuracy | 95.72% | – | 95.59% |
© 2021 by the authors.
Abstract
Measuring body sizes accurately and rapidly for optimal garment fit detection has been a challenge for fashion retailers. Especially for apparel e-commerce, there is an increasing need for digital and convenient ways to obtain body measurements to provide their customers with correct-fitting products. However, the currently available methods depend on cumbersome and complex 3D reconstruction-based approaches. In this paper, we propose a novel smartphone-based body size measurement method that does not require any additional objects of a known size as a reference when acquiring a subject’s body image using a smartphone. The novelty of our proposed method is that it acquires measurement positions using body proportions and machine learning techniques, and it performs 3D reconstruction of the body using measurements obtained from two silhouette images. We applied our proposed method to measure body sizes (i.e., waist, lower hip, and thigh circumferences) of males and females for selecting well-fitted pants. The experimental results show that our proposed method gives an accuracy of 95.59% on average when estimating the size of the waist, lower hip, and thigh circumferences. Our proposed method is expected to solve issues with digital body measurements and provide a convenient garment fit detection solution for online shopping.
Affiliations

1 Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
2 Department of Hospitality and Retail Management, Texas Tech University, Lubbock, TX 79409, USA