
Towards Efficient Modularity in Industrial Drying: A Combinatorial Optimization Viewpoint

Alisina Bayati¹, Amber Srivastava², Amir Malvandi³, Hao Feng⁴, and Srinivasa M. Salapaka¹
¹Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, IL 61801, USA. abayati2@illinois.edu, salapaka@illinois.edu
²Automatic Control Laboratory, Swiss Federal Institute of Technology (ETH Zurich), Physikstrasse 3, 8092 Zurich, Switzerland. asrivastava@ethz.ch
³Department of Agricultural and Biological Engineering, University of Illinois at Urbana-Champaign, IL 61801, USA. amirm2@illinois.edu
⁴North Carolina Agricultural and Technical State University, NC 27411, USA. hfeng@ncat.edu
This work was supported by the U.S. Department of Energy under award DE-EE0009125 and by NCCR Automation (grant number 180545), funded by the Swiss National Science Foundation.
Abstract

The industrial drying process consumes approximately 12% of the total energy used in manufacturing, with the potential for a 40% reduction in energy usage through improved process controls and the development of new drying technologies. To achieve cost-efficient and high-performing drying, multiple drying technologies can be combined in a modular fashion, with optimal sequencing and optimal control parameters for each. This paper presents a mathematical formulation of this optimization problem and proposes a framework based on the Maximum Entropy Principle (MEP) to solve simultaneously for both the optimal control parameters and the optimal sequence. The proposed algorithm addresses a combinatorial optimization problem with a non-convex cost function riddled with multiple poor local minima. Simulation results on drying distillers dried grains (DDG) products show up to a 12% improvement in energy consumption compared to the most efficient single-stage drying process. The proposed algorithm is guaranteed to converge to a local minimum, and its annealing schedule is heuristically designed to track the global minimum.

I Introduction

Industrial drying is responsible for roughly 12% of the total end-use energy used in manufacturing, equivalent to 1.2 quads annually [1]. The US Department of Energy estimates that by implementing more efficient process controls and new drying technologies, it is possible to reduce this amount by approximately 40% (0.5 quads/year), resulting in operating cost savings of up to $8 billion per year [2]. Moreover, the drying process has a significant impact on the quality of food products. Prolonged exposure to excessive heat can have negative effects on the physical and nutritional properties of the products [3].

In recent years, several more efficient drying technologies have been proposed in the literature, such as Dielectrophoresis (DEP) [4], ultrasound drying (US) [5][6], slot jet reattachment nozzle (SJR) [7], and infrared (IR) drying [8]. These technologies have helped improve product quality and energy efficiency. Industrial drying units typically use one of these technologies to achieve their drying goals. However, each technology performs with different efficiencies in different settings. Depending on the operating conditions, some technologies may be more favorable than others. For example, contact-based ultrasound technology is more effective in the initial phase of the process, where the moisture content of the food sample is relatively high, while pure hot air drying consumes less energy and is more effective when the moisture content is low. By combining these two processes, it is possible to take advantage of both technologies and compensate for their inefficiencies. Therefore, understanding (a) the sequence in which different drying techniques should be used, and (b) the operating parameters of each technology, can help us maximize their capabilities. Dividing the drying process into sub-processes that use different drying methods and operating conditions can help alleviate their individual limitations.

Figure 1: Schematic of the continuous smart dryer prototype with seven buckets, which accommodates multiple drying technologies to achieve better performance. In the example shown above, two DEP, two ultrasound, one IR, and two SJR modules are used in a specific order.

To illustrate, let us consider the continuous drying testbed depicted in Fig.1, which includes several drying modules such as ultrasound, DEP, SJR nozzle, and IR technologies. Each drying module is controlled by a set of parameters that influence the amount of moisture removal. For instance, ultrasound power and duty cycle are the control parameters of the ultrasound technology, while electric field intensity is the control variable of the DEP module. Additionally, the control parameters of each dryer impact the amount of energy consumed during the process, creating a tradeoff between energy consumption and moisture removal. Therefore, to minimize the total energy consumed by the testbed while achieving the desired moisture removal, a combinatorial optimization problem can be formulated to determine the optimal order in which the drying modules should be placed in the testbed and the optimal control parameters associated with them.

Similarly, this approach can be extended to batch-process drying with some adjustments. For example, in the testbed shown in Fig.2, which is used for the batch drying process, each technology can be used more than once. It includes an ultrasonic module [5], a drying chamber with a rectangular cross-section, a blower, and a heater. The food sample is located on a vibrating sheet attached to the ultrasonic transducer and exposed to the hot air coming from the heater, allowing combined hot-air and ultrasound drying. In this setup, the problem of interest is to reduce energy consumption, if possible, by dividing the process into consecutive pure hot-air (HA) and combined hot-air and ultrasound (HA/US) sub-processes, each with different operating conditions.

Figure 2: Schematic of the convective/ultrasound testbed for batch-process drying, which can be used for both pure hot air (HA) and combined hot air and ultrasound (HA/US) processes. The HA mechanism consists of a blower and a heater, whereas the US mechanism is a vibrating sheet attached to the ultrasound transducer. One can switch from the HA/US process to the pure HA process by turning off the US transducer.

Previous research in the field of drying has largely focused on improving the efficiency of existing drying methods or developing new technologies [9, 5, 7]. Some studies have used optimization routines such as the response surface method (RSM), a statistical procedure, to optimize process control variables using experimental data [10, 11]. However, there is limited literature that addresses optimization problems related to integrating different drying technologies through sequencing and parameter optimization. The primary contribution of our work is the modular use of multiple existing technologies to achieve cost efficiency with desired performance levels, while also allowing for optimal operating conditions that can vary over time, potentially improving performance even further. In our simulation results, presented in Section IV, we show up to a 12% reduction in energy consumption compared to the most efficient single-stage hot-air/ultrasound drying process, as well as up to a 63% improvement in energy efficiency compared to the commonly used optimal hot air drying method. Similar optimization problems can arise in various industrial processes that involve using a sequence of distinct devices with similar functions to form a unified process, such as the wood pulp industry with drying drums varying in radius and temperature, route optimization in multi-channel wireless networks with heterogeneous routers, and sensor network placement.

This paper introduces a framework based on the Maximum Entropy Principle (MEP) to model and optimize the various sub-processes in an industrial drying unit. These optimization problems pose significant challenges due to the combinatorially large number of valid sequences of sub-processes and their discrete nature. To address these issues, we assign a probability distribution to the space of all possible configurations. However, determining the optimal operating conditions of sub-processes alone is analogous to the NP-hard resource allocation problem, with a non-convex cost surface containing multiple poor local minima. Traditional algorithms like k-means often get trapped in these local minima and are sensitive to initialization. To overcome this, our algorithm uses a homotopy approach from an auxiliary function to the original non-convex cost function. This auxiliary function is a weighted linear combination of the original non-convex cost function and an appropriate convex function, chosen as the negative Shannon entropy of the probability distribution defined above. We start with weights that favor the negative Shannon entropy term, making the function convex and easily solvable. As the iteration progresses, the weight of the original non-convex cost increases, and the obtained local minima are used to initialize subsequent iterations. The auxiliary function converges to the original non-convex cost function at the end of the procedure. This approach is independent of initialization and tracks the global minimum by gradually transforming the convex cost function to the desired non-convex cost function.

II Problem Formulation

We formulate the problem stated above as a parameterized path-based optimization problem [12]. Such problems are described by a tuple

\mathcal{M} = \langle M, \gamma_1, \ldots, \gamma_M, \eta_1, \ldots, \eta_M, D \rangle, \quad (1)

where $M$ is the number of stages allowed, and $\gamma_k$ denotes the sub-process chosen to be used in the $k$-th stage. In particular,

\gamma_k \in \Gamma_k := \{f_{k1}, \ldots, f_{kL_k}\} \quad \forall\, 1 \leq k \leq M, \quad (2)

where $\Gamma_k$ is the set of all sub-processes permissible in the $k$-th stage. Moreover,

\eta_k \in H(\gamma_k) \subseteq \mathbb{R}^{d_{\gamma_k}} \quad \forall\, 1 \leq k \leq M, \quad (3)

where $\eta_k$ and $H(\gamma_k)$ denote the control parameters associated with the $k$-th sub-process and its feasible set, respectively. $D(\omega, \eta_1, \ldots, \eta_M)$ denotes the cost incurred along a path $\omega$, where $\omega \in \Omega := \{(f_{1i_1}, f_{2i_2}, \ldots, f_{Mi_M}) : f_{ki_k} \in \Gamma_k\}$ represents a sequence of sub-processes from the first stage to the terminal stage $M$. The objective of the underlying parameterized path-based optimization problem is to determine (a) the optimal path $\omega^* \in \Omega$, and (b) the parameters $\eta_k^*$ for all $1 \leq k \leq M$ that solve the following optimization problem

\min_{\{\eta_k\},\, \nu(\omega)} \quad \sum_{\omega \in \Omega} \nu(\omega)\, D(\omega, \eta_1, \ldots, \eta_M), \quad (4)
\text{subject to} \quad \sum_{\omega \in \Omega} \nu(\omega) = 1, \quad \nu(\omega) \in \{0, 1\},
\eta_k \in H(\gamma_k) \quad \forall\, 1 \leq k \leq M,

where $\nu(\omega)$ determines whether or not the path $\omega$ has been taken. In other words,

\nu(\omega) = \begin{cases} 1 & \text{if } \omega \text{ is chosen} \\ 0 & \text{otherwise.} \end{cases} \quad (5)

Fig. 3 further illustrates the notation defined above for the example process shown in Fig. 1.
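To make the notation concrete, the short Python sketch below (an illustration of ours, not the implementation used in this paper; the technology sets and the cost function are placeholders) enumerates the path space $\Omega = \Gamma_1 \times \cdots \times \Gamma_M$ and carries out the brute-force selection encoded by the binary variables $\nu(\omega)$ in (4)-(5) for a fixed set of parameters $\eta_k$:

```python
from itertools import product

# Hypothetical per-stage technology sets Gamma_k (e.g., 1 = DEP, 2 = SJR, 3 = US, 4 = IR).
Gamma = [{1, 2, 3, 4}, {1, 2, 3, 4}, {1, 2, 3, 4}]   # M = 3 stages

def path_cost(omega, eta):
    """Placeholder for D(omega, eta_1, ..., eta_M); replace with a real process model."""
    return sum(g * e for g, e in zip(omega, eta))

eta = [1.0, 0.8, 0.5]            # fixed control parameters eta_k, for illustration only
Omega = list(product(*Gamma))    # all valid sequences of sub-processes (the path space)

# Brute-force version of (4)-(5): nu(omega) = 1 only for the minimizing path.
omega_star = min(Omega, key=lambda omega: path_cost(omega, eta))
nu = {omega: int(omega == omega_star) for omega in Omega}
print(omega_star, path_cost(omega_star, eta))
```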

One approach to the optimization problem stated in (4) is to solve each objective separately. However, this approach ignores the coupling between the two objectives, which may result in a sub-optimal solution. Our MEP-based approach, on the other hand, solves the two simultaneously.

Let us reconsider the batch-process drying example described earlier, where the testbed allows up to $M$ different sub-processes, each of which can be either HA or HA/US. To pose the problem of interest as a parameterized path-based optimization problem, we define

\Gamma_k := \{0, 1\} \quad \forall\, 1 \leq k \leq M, \quad (6)

in which 0 and 1 indicate HA and HA/US sub-processes, respectively. Thus, the process configuration $\omega \in \Omega$ becomes

\omega = (\gamma_1, \gamma_2, \ldots, \gamma_M), \quad \gamma_k \in \{0, 1\} \quad \forall\, 1 \leq k \leq M. \quad (7)

The control parameters of both sub-processes, in this case, are the residence time $t$ and the air temperature $T$. The heater of the setup in Fig. 2 is designed to keep the air temperature between $30^{\circ}\mathrm{C}$ and $70^{\circ}\mathrm{C}$. Also, considering the settling time of the air temperature, every sub-process is required to take at least $t_0 = 2$ minutes. Hence,

\eta_k = \begin{bmatrix} t_k \\ T_k \end{bmatrix} \in U \quad \forall\, 1 \leq k \leq M, \quad (8)

where $U$, defined below, denotes the set of all admissible control parameters:

U := \left\{ \begin{bmatrix} t \\ T \end{bmatrix} \in \mathbb{R}^2 : t \geq 2 \text{ min}, \; T \in [30, 70]\,^{\circ}\mathrm{C} \right\}. \quad (9)

A key assumption here is that all the samples within a batch are similar in properties such as porosity and initial moisture content.
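For this batch-drying instance, a process configuration $\omega$ is a binary $M$-tuple and $U$ in (9) is a simple box constraint. A minimal sketch of the corresponding data structures (the variable names are ours):

```python
from itertools import product

M = 4                                      # allowed number of stages
Omega = list(product((0, 1), repeat=M))    # 0 = HA, 1 = HA/US, as in (6)-(7)

T_MIN, T_MAX, MIN_TIME = 30.0, 70.0, 2.0   # temperature bounds (deg C) and minimum stage time (min)

def is_admissible(eta_k):
    """Check a single eta_k = (t_k, T_k) against the box constraint U in (9)."""
    t_k, T_k = eta_k
    return t_k >= MIN_TIME and T_MIN <= T_k <= T_MAX

eta = [(5.0, 60.0), (5.0, 55.0), (10.0, 50.0), (10.0, 45.0)]
assert all(is_admissible(e) for e in eta)
print(len(Omega), "valid process configurations, e.g.,", Omega[0])
```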

Figure 3: Diagram corresponding to the process shown in Fig. 1, in which $\Gamma = \{\text{DEP (1), SJR (2), US (3), IR (4)}\}$. In the sequence shown by the arrows, $\gamma_1 = 1, \gamma_2 = 3, \ldots, \gamma_7 = 2$, which defines the process configuration $\omega = (\gamma_1, \gamma_2, \ldots, \gamma_7)$. Moreover, $\eta_i$ ($1 \leq i \leq 7$) determines the control variables of the technology used in the $i$-th stage.

To determine the cost of a process, we must identify the desired properties of the dried food products, such as wet basis moisture content and color. For simplicity, we focus on ensuring that the wet basis moisture content falls within a predetermined range ($\leq x_d$) by the end of the process. In this paper, we denote the wet basis moisture content of the food sample at the end of the $k$-th stage under process configuration $\omega$ as $x_k^{(\omega)}$. To account for the cost associated with the final moisture content, the corresponding dynamics must be modeled:

x_k^{(\omega)} = f_{\gamma_k}(x_{k-1}^{(\omega)}, \eta_k) \quad \forall\, 1 \leq k \leq M, \quad (10)

where $x_0^{(\omega)}$ denotes the initial wet basis moisture content of the food sample. The semi-empirical drying curves (moisture content versus time) of distillers dried grains (DDG) were derived and evaluated in [5] for $T = 25^{\circ}\mathrm{C}$, $T = 50^{\circ}\mathrm{C}$, and $T = 70^{\circ}\mathrm{C}$. The kinetics of drying for other temperatures can be approximated by interpolating the experimental drying curves in [5]:

x_{k+1}^{(\omega)} = \frac{e^{-K_{\gamma_k}(T_k)\, t_k^*(T_k, x_k^{(\omega)}, t_k)} \left(3.16 - M_{\gamma_k}(T_k)\right) + M_{\gamma_k}(T_k)}{1 + e^{-K_{\gamma_k}(T_k)\, t_k^*(T_k, x_k^{(\omega)}, t_k)} \left(3.16 - M_{\gamma_k}(T_k)\right) + M_{\gamma_k}(T_k)},

in which

K_0(T_k) = \left(0.074493\, T_k^2 - 45.5058\, T_k + 6839.9\right)/1000, \quad (11)
K_1(T_k) = \left(-0.05811\, T_k^2 + 39.962\, T_k - 6680.1\right)/1000

($T_k$ in Kelvin) denote the Lewis model constants [13] for the HA and HA/US sub-processes, respectively. Also,

M_0(T_k) = \left(0.2479\, T_k^2 - 172.09\, T_k + 30133\right)/10000, \quad (12)
M_1(T_k) = \left(0.1468\, T_k^2 - 107.27\, T_k + 19720\right)/10000

($T_k$ in Kelvin) represent the equilibrium dry basis moisture content (ratio of the weight of water to the weight of the solid material) of the DDG products. Moreover, $t_k^*(T_k, x_k, t_k)$ is defined to be:

t_k^*(T_k, x_k^{(\omega)}, t_k) = \frac{1}{K_{\gamma_k}(T_k)} \log\!\left( \frac{3.16 - M_{\gamma_k}(T_k)}{\frac{x_k^{(\omega)}}{1 - x_k^{(\omega)}} - M_{\gamma_k}(T_k)} \right) + t_k \quad (13)
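For reference, the interpolated drying-curve model (11)-(13) can be coded directly. The sketch below is our own illustration, not the simulation code used later; it assumes the fitted constants of [5] are applied with temperatures converted to Kelvin and time measured in minutes, and it propagates the wet basis moisture content through a few HA/US stages:

```python
import math

def K(gamma, T_K):
    """Lewis-model rate constants from (11); gamma: 0 = HA, 1 = HA/US; T_K in Kelvin."""
    if gamma == 0:
        return (0.074493 * T_K**2 - 45.5058 * T_K + 6839.9) / 1000.0
    return (-0.05811 * T_K**2 + 39.962 * T_K - 6680.1) / 1000.0

def M_eq(gamma, T_K):
    """Equilibrium dry basis moisture content from (12)."""
    if gamma == 0:
        return (0.2479 * T_K**2 - 172.09 * T_K + 30133.0) / 10000.0
    return (0.1468 * T_K**2 - 107.27 * T_K + 19720.0) / 10000.0

def moisture_update(x_wb, gamma, t_min, T_C):
    """One-stage wet basis moisture update following (11)-(13); T_C in Celsius, t_min in minutes."""
    T_K = T_C + 273.15
    k, m = K(gamma, T_K), M_eq(gamma, T_K)
    x_db = x_wb / (1.0 - x_wb)                               # wet basis -> dry basis
    t_star = math.log((3.16 - m) / (x_db - m)) / k + t_min   # equivalent time on the curve, (13)
    x_db_next = math.exp(-k * t_star) * (3.16 - m) + m       # Lewis curve evaluated at t_star
    return x_db_next / (1.0 + x_db_next)                     # dry basis -> wet basis

x = 0.755                                                    # initial wet basis moisture content
for gamma, (t_min, T_C) in [(1, (10.0, 70.0)), (1, (10.0, 70.0)), (1, (10.0, 70.0))]:
    x = moisture_update(x, gamma, t_min, T_C)
print(f"final wet basis moisture content: {x:.3f}")
```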

With the moisture dynamics in place, we define the process cost $D(\omega, \eta_1, \ldots, \eta_M)$ as:

D(\omega, \eta_1, \ldots, \eta_M) = \sum_{k=1}^{M} g_{\gamma_k}(\eta_k) + G(x_M^{(\omega)}, x_d), \quad (14)

in which $g_{\gamma_k}: \mathbb{R}^{d_{\gamma_k}} \rightarrow \mathbb{R}$ is the cost (e.g., energy consumption) of the $k$-th sub-process, and $G: \mathbb{R}^2 \rightarrow \mathbb{R}$ is a function penalizing violation of the constraint. In the batch-process drying example, $g_0(\cdot)$ is the energy consumed for HA drying, which can be approximated as follows:

g_0(\eta_k) = g_0(t_k, T_k) \propto \dot{m}_{air}\, c_p (T_k - T_0)\, t_k, \quad (15)

where $\dot{m}_{air} = \rho_{air} A V_{air}$ is the mass flow rate of the inlet air, $T_0$ is the ambient air temperature, $A$ is the cross-sectional area of the chamber, $V_{air}$ is the air velocity, and $c_p$ and $\rho_{air}$ are the average specific heat capacity and density of air over the operating temperature range of the testbed. We use a weighting coefficient $\alpha$ to adjust the cost of the HA sub-process:

g_0(t_k, T_k) = \alpha\, \rho_{air} A V_{air} c_p (T_k - T_0)\, t_k. \quad (16)

On the other hand, $g_1(\cdot)$ is the energy consumed by the HA/US sub-process and can be computed using

g_1(\eta_k) = g_1(t_k, T_k) = g_0(t_k, T_k) + P_{US}\, t_k, \quad (17)

in which $P_{US}$ is the power consumption of the ultrasound transducer. Therefore, the total cost of the process can be written as:

D(\omega, \eta_1, \ldots, \eta_M) = \sum_{k=1}^{M} \left( \alpha\, \dot{m}_{air} c_p (T_k - T_0) + \gamma_k P_{US} \right) t_k + G(x_M^{(\omega)}, x_d). \quad (18)
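Combining the stage energies (15)-(17) with the moisture dynamics yields the total cost (18). The sketch below is one possible realization of it (ours): the physical constants are illustrative, the quadratic penalty is just one admissible choice for the generic $G$, and moisture_update refers to the kinetics sketch above:

```python
def stage_energy(gamma, t_min, T_C, alpha=0.5, m_dot_air=0.01, c_p=1006.0,
                 T0_C=25.0, P_us=100.0):
    """Energy-like cost of one stage, following (15)-(17); all constants are illustrative."""
    hot_air = alpha * m_dot_air * c_p * (T_C - T0_C) * t_min   # alpha * m_dot * c_p * (T - T0) * t
    return hot_air + gamma * P_us * t_min                      # ultrasound power added only if gamma = 1

def process_cost(omega, eta, x0=0.755, x_d=0.075, penalty_weight=1e5):
    """Total cost (18): summed stage energies plus a penalty G on the final moisture content.
    The quadratic penalty below is one possible choice; the paper leaves G generic.
    moisture_update(...) is the kinetics sketch given earlier."""
    x, energy = x0, 0.0
    for gamma, (t_min, T_C) in zip(omega, eta):
        energy += stage_energy(gamma, t_min, T_C)
        x = moisture_update(x, gamma, t_min, T_C)
    G = penalty_weight * max(0.0, x - x_d) ** 2                # penalize only x_M > x_d
    return energy + G

print(process_cost((1, 1, 1), [(10.0, 70.0), (10.0, 70.0), (10.0, 70.0)]))
```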

Using the cost defined in (18), the corresponding combinatorial optimization problem is:

\min_{\omega, \{\eta_k\}} \quad D(\omega, \eta_1, \eta_2, \ldots, \eta_M) \quad (19)
\text{subject to:} \quad \eta_k \in U \quad \forall\, 1 \leq k \leq M.

To adapt the above problem to the form of the parameterized path-based optimization problem described in (4), we rewrite it as below:

\min_{\{\eta_k\},\, \nu(\omega)} \quad \sum_{\omega \in \Omega} \nu(\omega)\, D(\omega, \eta_1, \ldots, \eta_M), \quad (20)
\text{subject to:} \quad \sum_{\omega \in \Omega} \nu(\omega) = 1, \quad \nu(\omega) \in \{0, 1\},
\eta_k \in U \quad \forall\, 1 \leq k \leq M.

III Problem Solution

Combinatorial optimization techniques can be used to solve the optimization problem stated in (20). Intuitively, we can view it as a clustering problem in which the goal is to assign a particular sequence of sub-processes to every food sample with known initial moisture content. In this case, the locations of the cluster centers can be thought of as the control parameters associated with that sequence. This work, similar to [12] and [14], utilizes the idea of the Maximum Entropy Principle (MEP) [15], [16]. To be able to invoke MEP, we relax the constraint $\nu(\omega) \in \{0, 1\}$ in (20) and let it take any value in $[0, 1]$. We denote this new weighting parameter by

p(\omega) \in [0, 1] \quad \forall\, \omega \in \Omega. \quad (21)

In other words, we allow partial assignment of process configurations to the food sample. Note that this relaxation is only used in the intermediate stages of our proposed approach; the final solution still satisfies $p(\omega) \in \{0, 1\}$. Without loss of generality, we assume that $\sum_{\omega \in \Omega} p(\omega) = 1$. Hence, we can rewrite (20) as:

\min_{\{\eta_k\},\, p(\omega)} \quad \sum_{\omega \in \Omega} p(\omega)\, D(\omega, \eta_1, \ldots, \eta_M), \quad (22)
\text{subject to:} \quad \sum_{\omega \in \Omega} p(\omega) = 1, \quad p(\omega) \in [0, 1],
\eta_k \in U \quad \forall\, 1 \leq k \leq M.

Since the framework we are presenting is built upon MEP, let us briefly review it in the context of this problem. MEP states that, given prior information about the process, the most unbiased set of weights is the one with maximum Shannon entropy. Assume the information we have about the process is the expected value of the process cost ($\mathbb{E}(D) = D_0$). Then, according to MEP, the most unbiased weighting parameters solve the optimization problem

\max_{p} \quad -\sum_{\omega \in \Omega} p(\omega) \log\left(p(\omega)\right) \quad (23)
\text{subject to:} \quad \bar{D} = D_0,

where $\bar{D}$ is the expected value of the cost $D$, namely,

\bar{D} = \sum_{\omega \in \Omega} p(\omega)\, D(\omega, \eta_1, \ldots, \eta_M).

The Lagrangian corresponding to (23) is given by the maximization of $H - \beta\bar{D}$, where $H$ denotes the Shannon entropy in (23), or equivalently, by the minimization of $F = \bar{D} - \frac{1}{\beta}H$, where $\beta$ is the Lagrange multiplier. Therefore, the problem reduces to minimizing $F$ with respect to $p(\omega)$ and $\{\eta_k\}$ such that $\sum_{\omega \in \Omega} p(\omega) = 1$. We add this last constraint, with the corresponding Lagrange multiplier $\mu$, to the objective function $F$ and rewrite the problem as:

\min_{\{\eta_k\},\, p(\omega)} \quad \bar{D} - \frac{1}{\beta}H + \mu\left(\sum_{\omega \in \Omega} p(\omega) - 1\right) \quad (24)
\text{subject to:} \quad \eta_k \in U \quad \forall\, 1 \leq k \leq M.

We denote the new objective function in (24) by $\bar{F}$. Note that $\bar{F}$ is convex in $p$; therefore, the optimal weights can be determined by setting $\frac{\partial \bar{F}}{\partial p} = 0$, which gives the Gibbs distribution

p^*(\omega) = \frac{\exp\left(-\beta D(\omega, \eta_1, \ldots, \eta_M)\right)}{\sum_{\omega' \in \Omega} \exp\left(-\beta D(\omega', \eta_1, \ldots, \eta_M)\right)}. \quad (25)

Plugging (25) into $\bar{F}$, we obtain its corresponding minimum $\bar{F}^*$:

\bar{F}^* = \min_{p(\omega)} \bar{F} = -\frac{1}{\beta} \log \sum_{\omega \in \Omega} \exp\left(-\beta D(\omega, \eta_1, \ldots, \eta_M)\right). \quad (26)
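Numerically, (25) and (26) are best evaluated in log space, since $\exp(-\beta D)$ underflows for large $\beta$. A small sketch (ours; the cost values are illustrative):

```python
import numpy as np

def gibbs_weights(costs, beta):
    """Gibbs distribution (25) over all paths, computed stably in log space."""
    logits = -beta * np.asarray(costs, dtype=float)
    logits -= logits.max()                       # shift so the largest exponent is 0
    w = np.exp(logits)
    return w / w.sum()

def free_energy(costs, beta):
    """F_bar^* in (26): -(1/beta) * log sum_omega exp(-beta * D(omega))."""
    logits = -beta * np.asarray(costs, dtype=float)
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum())) / beta

costs = [12.0, 10.5, 10.4, 15.0]                 # D(omega, eta) for each path, illustrative values
print(gibbs_weights(costs, beta=0.1), free_energy(costs, beta=0.1))
```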

Subsequently, to determine the optimal process parameters, we minimize $\bar{F}^*$ with respect to the $\eta_k$'s. In other words, solving the constrained optimization problem

\min_{\{\eta_k\}} \quad \bar{F}^* \quad (27)
\text{subject to:} \quad \eta_k \in U \quad \forall\, 1 \leq k \leq M

results in the optimal control parameters $\{\eta_k^*\}$ for all the sub-processes. Any constrained optimization algorithm can be used to solve (27); as an example, we used the interior-point algorithm in our simulations.
The proposed algorithm thus consists of iterations with the following two steps:

  1. Use the parameters $\{\eta_k\}$ obtained in the previous iteration to find the optimal weights according to (25).

  2. Solve the constrained optimization in (27) to find the optimal parameters $\{\eta_k^*\}$, using $\{\eta_k\}$ as the initial guess for the algorithm.

Figure 4: Figures (a), (b), (c), and (d) represent the solution obtained using our proposed algorithm for $M = 2$, $M = 3$, $M = 4$, and $M = 5$, respectively, where $M$ indicates the maximum number of stages allowed. In all simulations, $\alpha$ is chosen to be 0.5. Figure (e) compares the temperature profile of the solution for $M = 6$ with those of the optimal single-stage HA and single-stage HA/US processes. The results show a 12.09% reduction in energy consumption compared to the single-stage HA/US process and a 63.19% improvement compared to the single-stage pure HA process.
Figure 5: Figures (a), (b), (c), (d), and (e) represent the solution of Algorithm 1 for $M = 5$ and $\alpha = 0.2$, $\alpha = 0.1$, $\alpha = \frac{1}{15}$, $\alpha = 0.05$, and $\alpha = 0.04$, respectively. Here, $\alpha$ denotes the relative cost weight of the pure HA process.

In both steps, the value of $\bar{F}$ is reduced; therefore, the algorithm converges. Furthermore, we can adjust the relative weight of the entropy term $-H$ to the average cost $\bar{D}$ using the Lagrange multiplier $\beta$. For $\beta \rightarrow 0$, maximizing the entropy term dominates minimizing the expected cost; in this case, the optimal weights derived in (25) are equal for all the valid process configurations. On the other hand, when $\beta \rightarrow \infty$, more importance is given to $\bar{D}$. In other words, for very large values of $\beta$, we have:

\lim_{\beta \rightarrow \infty} p(\omega) = \begin{cases} 1 & \text{if } \omega = \underset{\omega' \in \Omega}{\operatorname{argmin}}\; D(\omega', \eta_1, \ldots, \eta_M) \\ 0 & \text{otherwise} \end{cases} \quad (28)

The idea behind the algorithm is to start with $\beta$ values close to zero, where the objective function $\bar{F}$ is convex and the global minimum can be found. Then, we keep track of this global minimum by gradually increasing $\beta$ until $\max_{\omega \in \Omega} p(\omega) \rightarrow 1$. This procedure helps us avoid poor local minima. The proposed algorithm is shown in Algorithm 1.

Algorithm 1 Combinatorial Optimization using MEP
  Initialize: $\eta_k \in U$, $\beta = \beta_{min}$, $0 < \epsilon \ll 1$, $\zeta > 1$
  Compute $p(\omega)$ for all $\omega \in \Omega$ using (25).
  while $\max_{\omega} p(\omega) \leq 1 - \epsilon$ do
     Find the optimal $\{\eta_k^*\}$ using any constrained optimization algorithm to solve (27).
     Update $\{\eta_k\} \leftarrow \{\eta_k^*\}$ for all $1 \leq k \leq M$.
     Update $p(\omega)$ using (25) for all $\omega \in \Omega$.
     Set $\beta \leftarrow \zeta\beta$.
  end while
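A compact Python sketch of Algorithm 1 is given below. It is our own illustration under simplifying assumptions: the cost D(omega, eta) is assumed to accept a flattened parameter vector, the box constraint $U$ is passed as per-variable bounds, and scipy.optimize.minimize with L-BFGS-B stands in for the interior-point solver we used in our simulations.

```python
import numpy as np
from scipy.optimize import minimize

def mep_anneal(Omega, D, eta0, bounds, beta_min=1e-3, zeta=1.5, eps=1e-3, beta_max=1e6):
    """Sketch of Algorithm 1: anneal beta while alternating the Gibbs weights (25)
    with a constrained minimization of the free energy (26)-(27) over the eta's."""
    eta, beta = np.asarray(eta0, dtype=float), beta_min

    def free_energy(eta_flat, beta):
        costs = np.array([D(omega, eta_flat) for omega in Omega])
        shifted = -beta * costs
        m = shifted.max()
        return -(m + np.log(np.exp(shifted - m).sum())) / beta

    p = None
    while beta < beta_max:
        # Solve (27): minimize F_bar^*(eta) subject to the box constraints U.
        # (The paper uses an interior-point method; L-BFGS-B is a stand-in since U is a box.)
        res = minimize(free_energy, eta, args=(beta,), method="L-BFGS-B", bounds=bounds)
        eta = res.x
        # Recompute the Gibbs weights (25) at the updated parameters.
        costs = np.array([D(omega, eta) for omega in Omega])
        p = np.exp(-beta * (costs - costs.min()))
        p /= p.sum()
        if p.max() > 1.0 - eps:                  # weights are nearly one-hot: stop
            break
        beta *= zeta                             # geometric annealing schedule
    return Omega[int(np.argmax(p))], eta, p
```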

IV Simulations and Results

In this section, we simulate our proposed algorithm for drying DDG products using multiple sub-processes and compare it to the commonly used single-stage drying process. By changing the number of sub-processes allowed ($M$), we investigate how additional sub-processes affect efficiency. Moreover, we can also assign weights to the energy consumed by different sub-processes to include their additional costs (e.g., maintenance), using the coefficient $\alpha$ defined in (16) in our problem formulation.

Effect of the permissible number of stages ($M$): In the simulations shown in Fig. 4, we consider drying fresh DDG products from roughly 75.5% initial wet basis moisture content to around 7.5% at the end of the process. We ran our proposed algorithm for $\alpha = 0.5$ with the number of allowable sub-processes ranging from two to six (Fig. 6). The results show 11.96% ($M = 2$), 12.07% ($M = 3$), and 12.09% ($M = 4$, $M = 5$, and $M = 6$) improvement in energy consumption compared to the most efficient single-stage HA/US drying process. In addition, the algorithm reduced energy consumption by 63.13% ($M = 2$), 63.18% ($M = 3$), and 63.19% ($M = 4$, $M = 5$, and $M = 6$) in comparison with the optimal pure HA process.

As shown in Fig. 6, the cost of the solution given by Algorithm 1 decreases as the number of allowable sub-processes increases from two to four. However, for $M \geq 4$, increasing $M$ does not further reduce energy consumption.

Figure 6: Energy consumption of the algorithm solution for different numbers of allowable stages.

In general, increasing $M$ either decreases the cost of the process or leaves it unchanged. The reason is that the set of all valid process configurations with $M$ stages is a subset of the set of process configurations with $N > M$ stages. Consequently, we can choose $M$ such that further increasing it does not significantly affect the cost. In this case, for example, $M = 4$ can be chosen as the optimal number of stages.
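One simple way to operationalize this observation (not part of the procedure above) is to grow $M$ until the optimal cost returned by Algorithm 1 stops improving by more than a small tolerance; a hypothetical sketch, where solve_for_M wraps one run of Algorithm 1 and returns its cost:

```python
def choose_num_stages(solve_for_M, M_max=8, rel_tol=1e-3):
    """Pick the smallest M beyond which adding stages barely reduces the optimal cost.
    solve_for_M(M) is assumed to run Algorithm 1 with M stages and return its cost."""
    best_M, best_cost = 1, solve_for_M(1)
    for M in range(2, M_max + 1):
        cost = solve_for_M(M)
        if best_cost - cost <= rel_tol * abs(best_cost):
            break                                # diminishing returns: stop growing M
        best_M, best_cost = M, cost
    return best_M, best_cost
```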

Effect of the relative weight of the HA process ($\alpha$): With $\alpha = 0.5$, the HA/US process is significantly more efficient than the pure HA process, resulting in the latter being absent from the optimal solutions in Fig. 4. As $\alpha$ is reduced, Fig. 5 shows that the pure HA process gradually enters the optimal solution: the optimal configurations consist only of HA/US sub-processes for $\alpha = 0.2$, $\alpha = 0.1$, and $\alpha = \frac{1}{15}$; for $\alpha = 0.05$, the pure HA sub-process is used in one stage; and for $\alpha = 0.04$, the optimal solution is entirely a pure HA process with a constant temperature profile. The solutions (a)-(d) in Fig. 5 achieve energy consumption reductions of 9.95%, 5.75%, 3.62%, and 2.08%, respectively, compared to the most efficient single-stage processes.

V Conclusion and Future Work

In this paper, we introduced a class of combinatorial optimization problems prevalent in industrial processes involving sub-processes with similar objectives. We focused on industrial drying, examining continuous and batch processes, and applied our proposed algorithm to a batch process drying prototype that allowed for both HA and HA/US drying. Our study demonstrated the benefits of simultaneous optimization of the process configuration and control parameters, as opposed to treating them as separate problems.

Although our example was limited to two permitted technologies, our framework can be extended to accommodate any number of technologies $|\Gamma_k| \in \mathbb{N}$ for all $1 \leq k \leq M$. Additionally, our algorithm can be modified to include more control parameters and quality constraints. In future work, we plan to include air velocity, ultrasound power, and duty cycle as control variables, and quantitative color as a constraint representing desired features.

The methodology we presented yielded a combinatorially large space of decision variables, with a complexity of $O\left(\sum_{k=1}^{M} \binom{N}{k}\right)$. To reduce this complexity, we plan to employ the Principle of Optimality in our future work. This principle states that, along an optimal sequence of sub-processes, the next technology and its operating conditions are determined only by the current state, independent of prior sub-processes. Successfully utilizing this fact will increase the scalability of our proposed algorithm. Additionally, the algorithm can be adjusted to incorporate new constraints specific to the technologies and setup used.

References

  • [1] “Barriers to industrial energy efficiency,” US Department of Energy, Tech. Rep., June 2015.
  • [2] “Quadrennial technology review,” US Department of Energy, Tech. Rep., September 2015.
  • [3] N.-n. An, W.-h. Sun, B.-z. Li, Y. Wang, N. Shang, W.-q. Lv, D. Li, and L.-j. Wang, “Effect of different drying techniques on drying kinetics, nutritional components, antioxidant capacity, physical properties and microstructure of edamame,” Food Chemistry, vol. 373, p. 131412, 2022.
  • [4] M. Yang and J. Yagoobi, “Enhancement of drying rate of moist porous media with dielectrophoresis mechanism,” Drying Technology, vol. 0, no. 0, pp. 1–12, 2021.
  • [5] A. Malvandi, D. Nicole Coleman, J. J. Loor, and H. Feng, “A novel sub-pilot-scale direct-contact ultrasonic dehydration technology for sustainable production of distillers dried grains (DDG),” Ultrasonics Sonochemistry, vol. 85, p. 105982, 2022.
  • [6] O. Kahraman, A. Malvandi, L. Vargas, and H. Feng, “Drying characteristics and quality attributes of apple slices dried by a non-thermal ultrasonic contact drying method,” Ultrasonics Sonochemistry, vol. 73, p. 105510, 2021.
  • [7] M. Farzad and J. Yagoobi, “Drying of moist cookie doughs with innovative slot jet reattachment nozzle,” Drying Technology, vol. 39, no. 2, pp. 268–278, 2021.
  • [8] D. Huang, P. Yang, X. Tang, L. Luo, and B. Sunden, “Application of infrared radiation in the drying of food products,” Trends in Food Science & Technology, vol. 110, pp. 765–777, 2021.
  • [9] “Experimental study of heat transfer characteristics of drying process with dielectrophoresis mechanism,” in ASME International Mechanical Engineering Congress and Exposition, vol. 11: Heat Transfer and Thermal Engineering, November 2021.
  • [10] Z. Erbay and F. Icier, “Optimization of hot air drying of olive leaves using response surface methodology,” Journal of Food Engineering, vol. 91, no. 4, pp. 533–541, 2009.
  • [11] H. Majdi, J. Esfahani, and M. Mohebbi, “Optimization of convective drying by response surface methodology,” Computers and Electronics in Agriculture, vol. 156, pp. 574–584, 2019.
  • [12] N. V. Kale and S. M. Salapaka, “Maximum entropy principle-based algorithm for simultaneous resource location and multihop routing in multiagent networks,” IEEE Transactions on Mobile Computing, vol. 11, no. 4, pp. 591–602, 2012.
  • [13] W. K. Lewis, “The rate of drying of solid materials.” Journal of Industrial & Engineering Chemistry, vol. 13, no. 5, pp. 427–432, 1921.
  • [14] A. Srivastava and S. M. Salapaka, “Simultaneous facility location and path optimization in static and dynamic networks,” IEEE Transactions on Control of Network Systems, vol. 7, no. 4, pp. 1700–1711, 2020.
  • [15] E. T. Jaynes, “Information theory and statistical mechanics,” Physical review, vol. 106, no. 4, p. 620, 1957.
  • [16] ——, Probability Theory: The Logic of Science. Cambridge University Press, 2003.