2.5.2 Resource Allocation Using Deep Learning
Resource allocation is the optimal assignment of resources to the different components of an edge system. Deep learning offers several learning methods for allocating resources; in this section, the Deep Reinforcement Learning (DRL) allocation method is discussed. A green-mechanism resource-allocation scheme for edge networks is proposed to satisfy the requirements of mobile users. The "green mechanism" refers to increasing the energy efficiency of the system [33].
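A common way to quantify the energy efficiency that such a green mechanism targets is the throughput delivered per unit of energy spent. The exact objective used in [33] may differ, so the formulation below is only an illustrative sketch, where R_u is the data rate of user u, P_u is its transmit power, and P_circuit is the static circuit power (all notation assumed here, not taken from the chapter).

```latex
% Illustrative energy-efficiency objective (bits per joule); notation assumed, not taken from [33].
\mathrm{EE} \;=\; \frac{\sum_{u} R_u}{\sum_{u} P_u + P_{\text{circuit}}}
```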
Table 2.1 summarizes existing methods for allocating resources efficiently and the challenges each one faces. A DRL method is applied at the edge to overcome these challenges, taking the users and base stations as its inputs. DRL helps reduce the power and bandwidth required between the base station and the user, thus making the system energy efficient [33].
The main aim of the DRL method is to provide energy efficiency and a better user experience. Another advantage of DRL is that it avoids exceeding the capacity of the base stations. A convex optimization step is first derived to obtain the minimum transmission energy, and its result is then iterated with a Deep Q-Network (DQN); this also reduces the state space of the network. On the basis of the convex optimization results, the optimal connection and the optimal power distribution are found. DRL consists of two components: the agent and the external environment. The agent takes actions that change the state of the external environment, and the environment returns a reward; the objective is to maximize the value of this reward. In the experimental analysis, several users and three base stations are considered, and the number of steps needed for convergence is recorded for each number of users. It is seen that as the number of users increases, DRL requires more steps to converge, so the convergence speed slows down while the energy efficiency also increases [33].
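The agent-environment loop described above can be made concrete with a small sketch. The listing below is a minimal tabular Q-learning stand-in for the DQN pipeline of [33]: the state is the user currently being associated, an action selects a base station and a discrete power level, and the reward is the negative transmission energy, so maximizing reward minimizes energy. The problem sizes, the toy energy model, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy problem sizes (illustrative assumptions, not taken from [33]).
N_USERS, N_BS, N_POWER = 6, 3, 4                   # users, base stations, power levels
POWER_LEVELS = np.linspace(0.2, 1.0, N_POWER)      # normalized transmit power per level

rng = np.random.default_rng(0)
# Fixed channel gain between each user and each base station.
GAIN = rng.uniform(0.3, 1.0, size=(N_USERS, N_BS))

def transmission_energy(user, bs, power_idx, bs_load):
    """Toy energy model: energy grows with transmit power, falls with channel
    gain, and is penalized when the chosen base station is already loaded."""
    p = POWER_LEVELS[power_idx]
    congestion = 1.0 + 0.5 * bs_load[bs]           # crowding raises the energy cost
    return (p / GAIN[user, bs]) * congestion

# Tabular Q-function: state = which user is being placed, action = (bs, power level).
N_ACTIONS = N_BS * N_POWER
Q = np.zeros((N_USERS, N_ACTIONS))

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1                  # learning rate, discount, exploration

def run_episode():
    """Associate the users one by one; the reward is the negative energy spent."""
    bs_load = np.zeros(N_BS)
    total_energy = 0.0
    for user in range(N_USERS):
        if rng.random() < EPS:
            action = int(rng.integers(N_ACTIONS))  # explore
        else:
            action = int(np.argmax(Q[user]))       # exploit
        bs, power_idx = divmod(action, N_POWER)
        energy = transmission_energy(user, bs, power_idx, bs_load)
        reward = -energy                           # maximizing reward = minimizing energy
        next_value = Q[user + 1].max() if user + 1 < N_USERS else 0.0
        Q[user, action] += ALPHA * (reward + GAMMA * next_value - Q[user, action])
        bs_load[bs] += 1
        total_energy += energy
    return total_energy

for episode in range(2000):
    energy = run_episode()
print(f"total transmission energy after training: {energy:.3f}")
```

In a full DQN, the table Q would be replaced by a neural network fed with the network state, and the result of the convex optimization step could be used to prune the action space, as described above.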
Table 2.1 Existing studies using deep learning at the edge.
S. no. | Existing methods | Inference |
---|---|---|
1. | Joint task allocation and resource allocation with multi-user Wi-Fi. | To minimize the energy consumption at the mobile terminal, a Q-learning algorithm is proposed. In this method, energy efficiency is not considered, which leads to additional costs for the system. |
2. | Joint task allocation, decoupling bandwidth configuration and content source selection. | An algorithm was proposed to avoid frequent information exchange, but it proved to be less versatile and hence cannot be used in large-scale applications. |
3. | Fog computing method for mobile traffic growth and better user experience. | As users are located in different geographical places, implementing fog becomes challenging and requires high maintenance and increased costs. |
4. | Deterministic task arrival scenario | Each task is handled only after the present task has completed successfully. This cannot work when the data source generates tasks continuously, a situation the deterministic method cannot handle. |
5. | Random task arrival model | This method works only on newly arrived tasks and not on the tasks already in the queue, which prevents the system from operating efficiently. |