Hello Everyone,
I am a polymer formulation specialist developing custom formulations for medical applications, and I have been employing Bayesian optimization methods for this work. I am envisioning automating the preparation of polymer formulations at lab scale, but one of my constraints is that I cannot change the configuration of the processing equipment without human intervention, and the configuration is one of the key parameters influencing output properties such as morphology and microphase separation.

So if my inputs are material molecular structure, formulation composition, and processing conditions, the only way I can realistically automate is by fixing my equipment configuration. Will I still get good results if I do so, or has anyone else faced a similar problem and found a workaround? In my case, I could purchase additional components so a robot could switch configurations easily, but that would be quite expensive. I have enough expertise to say which configuration will work for which formulations.
Hello Ravi, as I understand it, you wish to know whether you can achieve ‘good’ results with a fixed equipment setup, or whether there are workarounds for the equipment limitation. If by ‘good’ you mean a high degree of reproducibility across your results, then I believe fixing your equipment configuration would be a good way to start your campaign.
Additionally, you may treat the equipment configuration as a categorical input parameter. Since you possess specific domain expertise, you can incorporate that prior knowledge, allowing the optimiser to favour particular configurations rather than exploring all combinations blindly. Since changing the equipment configuration is labour-intensive, I would batch all the experiments per configuration to minimise the human intervention required. If the equipment configuration does play a decisive role, the optimiser should catch on quickly and suggest only experiments that use the optimal configuration.
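To make the idea concrete, here is a minimal sketch of treating the equipment configuration as a one-hot-encoded categorical input to a Gaussian-process surrogate, with candidates scored by a simple upper-confidence-bound acquisition. All names, configurations, ranges, and data values are hypothetical placeholders, not from your actual setup; your domain priors could additionally be folded in, e.g. as a per-configuration bias on the acquisition score.

```python
# Sketch: equipment configuration as a categorical input in a BO surrogate.
# CONFIGS, the input variables, and all data values below are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

CONFIGS = ["config_A", "config_B", "config_C"]  # hypothetical equipment setups

def encode(temp_c, ratio, config):
    """Continuous inputs plus a one-hot encoding of the configuration."""
    one_hot = [1.0 if config == c else 0.0 for c in CONFIGS]
    return [temp_c / 100.0, ratio] + one_hot  # crude scaling of temperature

# Toy observations: (temperature, composition ratio, configuration) -> property
X = np.array([
    encode(25, 0.3, "config_A"),
    encode(40, 0.5, "config_B"),
    encode(60, 0.7, "config_C"),
    encode(35, 0.4, "config_A"),
])
y = np.array([0.62, 0.71, 0.55, 0.66])  # fabricated response values

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Score the same continuous settings under each configuration, then batch
# suggestions per configuration so a human only reconfigures once per batch.
candidates = [(50, 0.5, c) for c in CONFIGS]
Xc = np.array([encode(*c) for c in candidates])
mu, sigma = gp.predict(Xc, return_std=True)
ucb = mu + 1.96 * sigma  # simple upper-confidence-bound acquisition
best = candidates[int(np.argmax(ucb))]
print("next suggested configuration:", best[2])
```

If the configuration really is decisive, the posterior mean will separate by configuration after a handful of batches, and the acquisition will stop proposing the weaker setups on its own.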
As an example, I have some experience in this regard (Towards a greener electrosynthesis: pairing machine learning and 3D printing for rapid optimisation of anodic trifluoromethylation - RSC Sustainability (RSC Publishing) DOI:10.1039/D3SU00433C), where we used Latin Hypercube sampling to initialise an ML-assisted optimisation campaign. We found that the setup took a while to reach the set temperature. Since the Latin Hypercube initialisation covered a broad temperature range of 5–65 °C, running the experiments in the order suggested by the code was highly time-consuming, given how many experiments the campaign required. So instead of proceeding in the order of the predictions, we sorted the experiments by increasing temperature and ran them in that order. This saved us a significant amount of bother and allowed a quick turnaround for the campaign.
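The reordering trick above is straightforward to script. The sketch below generates a Latin Hypercube initial design with SciPy and sorts it by temperature before execution, so the rig only ever heats up between runs; the second variable and its bounds are hypothetical stand-ins for whatever else you are varying.

```python
# Sketch: Latin Hypercube initial design, executed in order of increasing
# temperature to avoid repeated heat-up/cool-down cycles.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=7)
unit = sampler.random(n=8)
# Columns: temperature (5-65 degC, as in the paper) and a second,
# hypothetical continuous variable (e.g. a flow rate).
design = qmc.scale(unit, l_bounds=[5.0, 0.1], u_bounds=[65.0, 1.0])

# Sort the planned experiments by temperature (column 0) before running.
run_order = design[design[:, 0].argsort()]
for temp, flow in run_order:
    print(f"run at {temp:5.1f} degC, flow {flow:.2f}")
```

Since the initial design is model-free, reordering it costs nothing statistically; only during the model-guided phase does execution order start to matter.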
You might also consider simulations to model how the equipment configuration affects the results, if you have data to build and validate them. This could significantly reduce the labour needed on your part.