Daniil Bash,ab Frederick Hubert Chenardy,c Zekun Ren,d Jayce J Cheng,b Tonio Buonassisi,de Ricardo Oliveira,f Jatin N Kumarb and Kedar Hippalgaonkar*bc
aDepartment of Chemistry, National University of Singapore, 3 Science Drive, Singapore, 117543, Singapore
bInstitute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), #08-03, 2 Fusionopolis Way, Innovis, Singapore, 138634, Singapore
cDepartment of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Block N4.1, Singapore, 639798, Singapore. E-mail: kedar@ntu.edu.sg
dSingapore-MIT Alliance for Research and Technology, 1 Create Way, #10-01 & #09-03 CREATE Tower, Singapore, 138602, Singapore
eDepartment of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
f2D Materials Pte Ltd (2DM), Singapore
First published on 27th January 2022
Functional composite thin films have a wide variety of applications in flexible and/or electronic devices, telecommunications and emerging multifunctional coatings. The rapid screening of their properties is a challenging task, especially with multiple components defining the targeted properties. In this work we present a platform for the accelerated, automated screening of viscous graphene suspensions for optimal electrical conductivity. Using an Opentrons OT2 robotic auto-pipettor, we tested the 3 most industrially significant surfactants – PVP, SDS and T80 – by fabricating 288 samples of graphene suspensions in aqueous hydroxypropylmethylcellulose. Enabled by our custom motorized 4-point probe measurement setup and computer vision algorithms, we then measured the electrical conductivity of every sample and identified that the highest performance is achieved for PVP-based samples, peaking at 10.8 mS cm−1 without annealing. The automation of the experimental procedure allowed us to perform the majority of the experiments using robots, while the involvement of human researchers was kept to a minimum. Overall, the experiment was completed in less than 18 hours, only 3 of which involved humans.
A common way to overcome this challenge is the use of surfactants.4,12 Surfactants help to separate graphene flakes from each other and prevent their aggregation, while also significantly increasing their affinity to the solvent.26,27 However, excessive quantities of surfactant drastically reduce the electrical conductivity of the resulting device, since individual graphene flakes are shielded by surfactant molecules and cannot form an efficient path for charge carriers to flow through.28 Hence, finding an optimal type and concentration of surfactant is a key challenge for industry, as these properties have a critical impact on the performance of the final device, the bill of materials (BOM), and the overall cost of manufacturing. Several studies have been carried out to identify the best surfactants for graphene dispersion in various solvents.26,28–31
Surfactants like sodium dodecylsulphate (SDS), polyvinylpyrrolidone (PVP) and Tween 80 (T80), among several others, have been shown to have optimal properties for graphene dispersion.32–35 Industrially, the optimization of the graphene–surfactant formulation is of major importance, as described above. To our knowledge, very few studies have explored the parameter space of graphene–surfactant mixtures in full or performed detailed characterization of the conductivity profiles of the samples. This can partially be explained by the fact that traditional manual methods of sample preparation are inadequate for covering a large parameter space. Therefore, we found it necessary to develop a robust high-throughput method for the automation of liquid sample preparation, drop-casting and thin film characterization. The use of robotics and automation allows for the highly reproducible, systematic fabrication of hundreds or thousands of samples with almost no human intervention.
In this work, we develop a methodology for the fabrication of graphene–surfactant mixtures of various ratios in an automated fashion, thin film preparation and accelerated characterization. To achieve that, we used an Opentrons OT2 auto-pipettor (Fig. 1a) to perform an exhaustive search of the full parameter space for ternary mixtures of graphene, hydroxypropylmethylcellulose (HPMC) and each of the 3 surfactants: PVP, SDS and T80. We used custom python-based software to generate the design of experiment (DoE) in csv format, which was used to provide instructions for the auto-pipettor for mixing and drop-casting.
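The DoE generation step can be sketched as below. The function and column names are illustrative rather than those of our actual software; compositions are assumed to be expressed as volume fractions of the three stock solutions, with HPMC making up the balance:

```python
import csv
import itertools

def generate_doe(graphene_fracs, surfactant_fracs, surfactants, total_volume_ul=200):
    """Enumerate the full grid of graphene/surfactant ratios for each
    surfactant and emit one row per sample for the auto-pipettor."""
    rows = []
    for surf, g, s in itertools.product(surfactants, graphene_fracs, surfactant_fracs):
        hpmc = round(1.0 - g - s, 6)   # balance of the mixture is HPMC stock
        if hpmc < 0:
            continue                   # skip infeasible compositions
        rows.append({
            "surfactant": surf,
            "graphene_ul": round(g * total_volume_ul, 2),
            "surfactant_ul": round(s * total_volume_ul, 2),
            "hpmc_ul": round(hpmc * total_volume_ul, 2),
        })
    return rows

def write_doe_csv(rows, path):
    """Write the DoE as the csv consumed by the pipetting instructions."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```

Each csv row then maps directly onto one aspirate/dispense sequence of the OT2, so the same file doubles as the experiment record.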
An advantage of our approach is the robustness of the system across a varied range of viscous solutions. Specifically, we used HPMC, a common rheology modifier, to mimic the viscosity and film-formation characteristics of an ink/paint.36 Despite the added complexity of our control software, the auto-pipettor was able to handle viscous liquids without affecting the quality of the samples or the reproducibility of the experiment. Importantly, we significantly improved the efficiency and throughput of sample fabrication, to less than 2 minutes per sample – including mixing of the stock solutions to obtain the desired graphene–surfactant ratio and subsequent drop-casting – an approximately 2–5-fold increase in throughput compared to the manual process (Fig. 1). The involvement of human researchers was limited to loading samples in and out of the auto-pipettor and checking the calibration during the fabrication stage, and to positioning the 4pp tool above the samples and post-processing the image recognition data during the measurement stage.
The rapid fabrication of test samples without a robust methodology for high-throughput characterization of these samples would have little benefit, especially for the use of supervised learning algorithms. Therefore, we have developed automated characterization techniques, which involve a 4-point probe (4pp) for automated full IV measurement, as well as a computer vision-based algorithm for thickness approximation (Fig. 1c and d respectively). The automation of the 4pp measurement has an added advantage of causing minimal damage to the samples and is highly reproducible due to the computer vision-based indexing of the samples' positions.
Traditionally, in manual measurements the probe is pressed into the sample by hand and held there by the researcher until the measurement is done, which takes a few minutes in addition to positioning and setting up the instrument. However, it is almost impossible to hold the probe at exactly the same angle and at constant force throughout this procedure. Hence, the samples can be damaged by the probe slipping, and the results may not be consistent. In contrast, automated manipulation of the probe allows us to always apply uniform pressure at exactly the right angle, minimizing damage to the sample and yielding highly reproducible results, while freeing up the hands and mind of the researcher. Prior examples of automated 4pp measurement include the use of refurbished 3D printers with all open-source components, described by Handy Chandra et al.;37 the use of micro-electro-mechanical system (MEMS) based micro- or nano-4pp for in situ measurements of large material libraries, developed by Alfred Ludwig et al.;38 and the use of an automated 4pp stage for measuring the Hall effect, as described in the work of Rudolf Kinder et al.,39 among others.
In contrast, the advantages of our 4pp measurement method include the use of off-the-shelf components, which are commonly available to research groups involved in thin film fabrication and characterization, and the minimal coding or electrical engineering skills required compared to the majority of alternative approaches in the literature. Our approach, on par with complementary techniques, allowed us to increase the throughput to approximately 1 minute per sample: a full wafer with 49 samples is automatically characterized in ∼45 minutes, including detection, alignment and measurement, which is 3–5 times faster than manual measurements (Fig. 1c). The obtained IV curves were then converted into sheet resistance and used for further processing in this study.
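The IV-to-sheet-resistance conversion can be sketched as below, assuming the standard collinear 4pp geometric factor π/ln 2 for a film much larger than the probe spacing (function names are illustrative):

```python
import math
import numpy as np

def sheet_resistance_from_iv(current_a, voltage_v):
    """Fit the linear IV slope (V/I, in ohms) and apply the collinear
    4-point-probe correction factor pi/ln(2), valid for a thin film whose
    lateral extent greatly exceeds the probe spacing."""
    slope, _ = np.polyfit(current_a, voltage_v, 1)
    return (math.pi / math.log(2)) * slope   # ohms per square

def conductivity_s_per_cm(sheet_res_ohm_sq, thickness_cm):
    """Electrical conductivity sigma = 1 / (R_s * t)."""
    return 1.0 / (sheet_res_ohm_sq * thickness_cm)
```

Fitting the whole IV curve, rather than taking a single V/I point, also makes non-ohmic contacts visible as a poor linear fit.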
The next automation step in our methodology is the computer vision-based thickness approximation algorithm. It was used to detect the exact outline of each sample and calculate its area in pixels, which was then correlated with the true area of reference samples – 2 black circles of known area printed on A4 paper, placed underneath the wafer (Fig. 1b and d, bottom right corner). Thereafter, we were able to calculate the thickness of the samples based on the known dispensed volume and concentration of every sample. These thickness data, combined with the sheet resistance data, were used to calculate the property of interest – electrical conductivity. Overall, the full workflow, including the fabrication and full characterization of all 288 samples, was completed in ∼18 hours spread across 3 days, of which ∼15 hours were fully automated, thus requiring a focused human time of only 3 hours. It is worth mentioning that the majority of the fabrication time was spent on programmed 3 second delays to let the viscous HPMC flow in and out of the pipette tip. When applied to non-viscous solutions, the same protocol is completed within 10 minutes, compared to 3.5 hours for viscous ones.
This ability to perform experiments rapidly allows the use of dispersions that are stable over a period of only a few hours, which enables the researchers to broaden the parameter space and go beyond the compositions of infinitely stable dispersions. This opens a plethora of opportunities to explore large parameter spaces of increasing complexity in a high-throughput manner, enabled by a combination of automated mixing, drop-casting, and testing systems.
While working with the HPMC solution, to account for its high viscosity, the robot was programmed to pause for 3 seconds after every aspiration and dispense operation, to allow the pressure in the pipette tip to stabilize. Another necessary process step was dipping the pipette tip into the HPMC solution to a depth of <2 mm below the meniscus, to minimize adhesion of HPMC to the outer walls of the tip and prevent interference with the programmed dispense volume.
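A minimal sketch of such a viscous transfer routine is shown below. It assumes Opentrons-style objects (pipettes expose aspirate/dispense, the protocol exposes delay, wells expose top); the wrapper itself and its parameter names are illustrative, not our production code:

```python
def viscous_transfer(pipette, protocol, volume_ul, source, dest,
                     settle_s=3.0, immersion_mm=2.0):
    """Transfer a viscous liquid with pressure-settling pauses.

    `pipette` and `protocol` are assumed to expose Opentrons-style
    aspirate/dispense/delay methods, and wells a top(z) locator; this
    wrapper is an illustrative sketch.
    """
    # Dip <2 mm below the liquid surface so HPMC does not coat the
    # outer walls of the tip.
    shallow = source.top(-immersion_mm)
    pipette.aspirate(volume_ul, shallow)
    protocol.delay(seconds=settle_s)   # let the viscous liquid finish flowing in
    pipette.dispense(volume_ul, dest)
    protocol.delay(seconds=settle_s)   # let the tip empty completely
```

The fixed pauses dominate the runtime for viscous stocks, which is why the same protocol finishes far faster on non-viscous solutions.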
Atrue = k·Apixel (1)

t = Vc/(ρ·Atrue) (2)

where V is the dispensed volume, c is the total concentration of solids and ρ is the density of the dried film.
The obtained coefficients were averaged and applied to the rest of the samples on the wafer to obtain the value of the true area. Then, combining it with the known dispensed volume of each sample, we calculated the thickness of the samples. To make this link, a few key assumptions are made: (1) the volume of the drop-cast samples is identical for all samples, down to the technical limitations of the Opentrons; (2) every sample has an identical total concentration of solids, down to the technical limitations of analytical balances and neglecting minute errors during the transfer of solutions; (3) the surfaces of the samples are flat and uniform, acknowledging that this assumption is the biggest source of error, but still accurate enough for screening purposes. It is worth mentioning that the calculated densities of all samples were found to be identical to two decimal places. We then compared our calculated thickness data to those of a randomly selected 7% of samples measured on a surface profiler. Note that surface profilometry has its own intrinsic errors, since it requires manually ‘scratching’ the center of the sample and is by nature a single-line measurement, which is then extrapolated to the whole film. The mean absolute percentage error (MAPE) between the thickness calculated using the above assumptions and the measured thickness was found to be within 7–11% (Fig. 2), hence supporting the feasibility of this method for the high-throughput screening of thickness for large quantities of electronic composites.
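Under assumptions (1)–(3), the thickness calculation can be sketched as below. The relation t = V·c/(ρ·A) is our illustrative reading of how dispensed volume, solids concentration and density combine; function names are hypothetical:

```python
def pixel_to_area_coefficient(ref_area_cm2, ref_area_px):
    """Calibration coefficient k in A_true = k * A_pixel, from a printed
    reference circle of known area."""
    return ref_area_cm2 / ref_area_px

def film_thickness_cm(area_px, k, volume_ml, conc_g_per_ml, density_g_per_cm3):
    """Thickness of a flat, uniform film: the deposited solid mass V*c,
    divided by density, gives the film volume; dividing by the true area
    gives the thickness. This relation is an assumption of the sketch."""
    area_cm2 = k * area_px
    solid_mass_g = volume_ml * conc_g_per_ml
    film_volume_cm3 = solid_mass_g / density_g_per_cm3
    return film_volume_cm3 / area_cm2

def mape(measured, calculated):
    """Mean absolute percentage error against profilometer readings."""
    return 100.0 * sum(abs((m - c) / m) for m, c in zip(measured, calculated)) / len(measured)
```

Comparing `film_thickness_cm` against a small profilometer subset via `mape` is what bounds the proxy error for the rest of the wafer.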
An integral component of the optical thickness measurement technique is the implementation of computer vision (OpenCV) algorithms. The OpenCV pipeline consists of a chain of algorithms including segmentation and contour detection, as well as data processing of the individual sample areas. A graphical user interface (GUI) was built in Python using the PySimpleGUI library to allow flexible, real-time tuning of the key parameters, such as the segmentation threshold value, the number of noise removal iterations, and the distance transform mask size, to optimize the performance of these algorithms.
Following the segmentation and detection of the films, each sample is index-labelled with an integer value, sorted from top-left to bottom-right. The contour features and moments of each film are then analyzed to produce the area data in pixels.
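The reading-order indexing can be sketched as follows, assuming detected centroids are grouped into rows by their y-coordinate within a tolerance and then sorted by x (the function name and tolerance value are illustrative):

```python
def index_samples(centroids, row_tol_px=40):
    """Assign integer indices to film centroids in reading order:
    top-left to bottom-right. Centroids are (x, y) pixel tuples."""
    ordered = sorted(centroids, key=lambda c: c[1])   # sort by y first
    rows, current = [], [ordered[0]]
    for c in ordered[1:]:
        # Centroids within row_tol_px vertically belong to the same row.
        if abs(c[1] - current[-1][1]) <= row_tol_px:
            current.append(c)
        else:
            rows.append(current)
            current = [c]
    rows.append(current)
    result = []
    for row in rows:
        result.extend(sorted(row, key=lambda c: c[0]))  # left to right
    return {i: c for i, c in enumerate(result)}
```

A stable index per sample is what lets the 4pp stage revisit the same film positions reproducibly between runs.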
Based on the pixel area values, the thickness of a given film can be calculated by relating them to the pixel values of 2 reference samples of known size (see eqn (1) and (2)). To quantify the error between the calculated thickness and the thickness obtained from the surface profiler, we fit the calculated thickness to the area data and compute the MAPE between the fitted and profilometry-measured thickness values for samples with different surfactants (PVP-based, SDS-based and T80-based). As shown in Fig. 2, in all 3 cases the MAPE is below 11%, which is acceptable for high-throughput proxy measurements. We use a Gaussian process (GP) model to reproduce the equation that relates area to thickness values. The GP model is chosen as it can account for potential white noise errors arising from the computer vision algorithm and gives an uncertainty estimate. The GP kernel implemented here consists of a radial basis function (RBF) kernel and a white noise kernel.
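A minimal sketch of such a GP fit using scikit-learn, whose RBF and WhiteKernel classes match the kernels described (the hyperparameter choices here are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_area_thickness_gp(area_px, thickness_um):
    """Fit a GP mapping pixel area -> thickness. The RBF term captures the
    smooth trend; the WhiteKernel absorbs pixel-level segmentation noise
    and provides the uncertainty estimate."""
    X = np.asarray(area_px, dtype=float).reshape(-1, 1)
    y = np.asarray(thickness_um, dtype=float)
    kernel = RBF(length_scale=np.ptp(X) or 1.0) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)
    return gp

# gp.predict(X_new, return_std=True) yields both the thickness estimate
# and a per-sample uncertainty band.
```

The `return_std=True` output is what makes the GP preferable to a plain polynomial fit here: each proxy thickness comes with its own error bar.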
The results of our study show that films made using PVP as the surfactant are the most conductive, with the highest conductivity value of 10.8 mS cm−1 (Fig. 3a), which is at least 2 orders of magnitude higher than the current state of the art for similar material systems.44 Interestingly, the conductivity of PVP-based films does not show a dependence on the concentration of PVP, unlike films made with the other surfactants. SDS-based films show low conductivity values under most of the tested conditions, except for the samples with a high concentration of graphene, as seen in Fig. 3b. The concentration of SDS showed no significant influence on device performance. The worst conductivity values were observed for films made using T80, where the highest observed conductivity was only 0.06 mS cm−1, as shown in Fig. 3c, about 3 orders of magnitude lower than that of the PVP-based samples. The majority of the measured T80-based samples show conductivity values too low to be distinguished from instrument noise.
We speculate that PVP shows the best results due to the higher affinity of its pyrrolidone rings for the aromatic rings of graphene, compared to the other surfactants used.45 Also, owing to its polymeric nature, in contrast to SDS, PVP could promote better alignment of graphene sheets and therefore increase the overall conductivity of the sample. SDS, on the other hand, despite showing mediocre conductivity results, could stabilize the dispersion at a 4 times lower concentration than the other tested surfactants. This feature could be significant for specific industrial applications – e.g. high conductivity (10−1 to 102 S cm−1) for paint-on sensors or low conductivity (10−8–10−4 S cm−1) for anti-static coatings – as it could provide significant cost savings.
We attribute the poor performance of the T80-based samples not to any inherent incompatibility of this surfactant as a stabilizing agent for graphene dispersions, but to excessively aggressive sonication parameters. The conditions used to sonicate the dispersions could have damaged the polymer chains to the point that T80 lost most of its properties as a surfactant. The backbone of T80 comprises ester C–O bonds, which are weaker than the C–C bonds in PVP. Future experiments will investigate this hypothesis using a series of milder sonication conditions with T80-based composites.
Further analysis of the surface of the composites using scanning electron microscopy (SEM) revealed that the distribution of graphene inside the HPMC matrix is relatively uniform at the lowest surfactant load and at the highest and medium graphene loads (Fig. 4a–l respectively), suggesting the effectiveness of our dispersion and fabrication procedures. The SEM images reveal that the graphene islands inside the composites are relatively dense; however, to test how much the separation of these islands by HPMC affects the performance of the composites, we decided to anneal the samples beyond the decomposition temperature of HPMC and repeat the electrical measurements and SEM characterization.
The samples used for SEM imaging without any post-processing were also used to test the effect of annealing. The annealing was done in a vacuum furnace: the samples were first heated from room temperature to 250 °C at 10 °C min−1, then from 250 °C to 500 °C at 1 °C min−1, and held at 500 °C for 120 min to completely burn off the HPMC binder.
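The total furnace time implied by this profile can be checked with a short calculation; the 25 °C starting value is our assumption for room temperature:

```python
def anneal_schedule_minutes(start_c=25.0):
    """Total duration of the vacuum-furnace profile used here:
    start -> 250 C at 10 C/min, 250 -> 500 C at 1 C/min, hold 120 min.
    The default start temperature (25 C) is an assumed room temperature."""
    ramp1 = (250.0 - start_c) / 10.0   # fast ramp, ~22.5 min from 25 C
    ramp2 = (500.0 - 250.0) / 1.0      # slow ramp, 250 min
    hold = 120.0                       # binder burn-off hold
    return ramp1 + ramp2 + hold
```

At roughly 6.5 hours per batch, the slow second ramp dominates, which is why varying annealing conditions was impractical without further automation.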
After annealing, the thickness and sheet resistance of the samples were measured again. We observed a conductivity increase of up to 7 orders of magnitude, suggesting that with the polymer matrix removed, the graphene flakes came into close contact with each other and formed a dense percolation network. The highest increase in conductivity is observed, as expected, for the T80-based samples, because they were barely conductive prior to annealing, while for the SDS-based samples the increase in conductivity is the lowest, most likely due to the lower influence of annealing on the stability of the surfactant molecule. As seen from the images in Fig. 4(m–x), the morphology of the annealed samples does not appear to depend on the surfactant or on the initial concentration of graphene inside the composite, in contrast to the untreated samples.
In principle, image analysis with computer vision algorithms could be performed on these SEM images to extract features that could be linked to electrical conductivity. However, a large number of images would have to be taken to provide a comprehensive dataset, and hence we did not attempt this in the current manuscript, though one could envision undertaking such analysis in future work.
We also speculate that the introduction of simple automation into sonication and annealing processes could unlock another vast and sparsely sampled parameter space for studying the performance of composites that are made from dispersions. The slow and sequential nature of these steps was the main limiting factor for us not to try various sonication or annealing conditions. The community would greatly benefit from developments in automation protocols for these key processes.
In conclusion, using this approach we fabricated 288 samples, with the goal of identifying the best surfactant for graphene dispersions, in less than a week of experimental work, most of which was done by robots. Hence, we demonstrate the viability and applicability of automation tools in scientific experiments, especially those that rely on many repetitive operations and the exploration of vast parameter spaces. These techniques can free up most of the human work-hours in such experiments and delegate the tedious work to robots. Our work demonstrates the push towards automation in science laboratories, where human researchers engage in creative scientific work and the planning of experiments, while execution is delegated to robots, machine learning algorithms and efficient high-throughput experimentation and analytical tools.
This journal is © The Royal Society of Chemistry 2022