With the increasing demand for higher clock speeds, the number of cores on a single die has grown over the last decade. According to Karavadara et al. (2017), the pursuit of ever-higher clock speeds exposed several issues that degraded performance in single-core processor systems. One of these issues is memory management; among the algorithms used to address it is cache partitioning.
The following report contributes to the discussion of the different memory management algorithms used in single-core and multi-core processors to manage memory allocation and process execution efficiently.
A multi-core system contains multiple central processing units (CPUs) integrated onto a single chip. These cores are largely independent of each other, and each can perform computing tasks such as running a program, managing data or executing instructions (Imtiaz, Hameed and Min-Allah 2010). In recent years information and communication technology (ICT) has evolved rapidly, and computer chip manufacturers, having long targeted ever-higher clock speeds, have shifted to implementing multi-core processors, from dual-core CPUs up to eight-core designs such as the IBM POWER7 series (Qureshi and Patt 2006). Multi-core processors are well suited to server workloads because they increase the number of users who can share server resources simultaneously. How memory management is performed in a multi-core processor, however, is handled by numerous algorithms. The following is a literature review of the different algorithms used for memory allocation in multi-core operating systems, all extracted from peer-reviewed journals. These journals have been selected because they contain information about the data structures and algorithms used to manage memory allocation feasibly on multi-core processors.
The first paper under review presents a scheme that manages heap data within the local memory present in each core of a limited local memory (LLM) multi-core processor. The article shows how heap data can be managed feasibly using a software cache in a process that is semi-automatic in nature (Karavadara et al. 2017). Managing heap data across cores also requires changing the software cache through modifications to the code of the different threads. The article focuses on the cross-thread modifications required by the algorithm, which are difficult to code and equally difficult to debug, and which only become harder as the number of cores increases. The paper therefore proposes a semi-automatic and scalable scheme for heap data management that hides this complexity in a library with a natural programming interface. In addition, for embedded applications in which the maximum heap size is known at compile time, the paper proposes optimisations of heap management that significantly improve application performance (Khatoon & Mirza 2015). The algorithm was evaluated on several MiBench benchmarks executed on the Sony PlayStation 3. The results confirm that the proposed heap data management scheme is easy to use across multiple cores, and that when the maximum heap data size is known, the optimisation improves application performance by an average of 14 per cent.
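To illustrate the idea of hiding heap management behind a library interface, the following minimal C sketch shows a hypothetical software-cache API. The names heap_get, heap_put, SLOT_SIZE and the DMA helpers are illustrative assumptions, not the interface from the reviewed paper: each core accesses global heap objects only through these calls, so the library can move data between the small local memory and global memory transparently.

#include <string.h>
#include <stdint.h>

#define LOCAL_HEAP_SLOTS 64
#define SLOT_SIZE        128

typedef struct {
    int      valid;               /* slot currently holds a cached object      */
    int      dirty;               /* local copy modified since it was fetched  */
    uint64_t global_addr;         /* global address cached in this slot        */
    char     data[SLOT_SIZE];     /* copy of the object held in local memory   */
} slot_t;

static slot_t local_cache[LOCAL_HEAP_SLOTS];
static char global_mem[1 << 16];  /* simulated global heap backing store       */

/* dma_read/dma_write stand in for the platform's transfer primitives. */
static void dma_read(void *local, uint64_t global, size_t n)
{ memcpy(local, &global_mem[global], n); }

static void dma_write(uint64_t global, const void *local, size_t n)
{ memcpy(&global_mem[global], local, n); }

/* Fetch a heap object into local memory, writing back whatever it evicts. */
void *heap_get(uint64_t global_addr)
{
    slot_t *s = &local_cache[(global_addr / SLOT_SIZE) % LOCAL_HEAP_SLOTS];
    if (!s->valid || s->global_addr != global_addr) {
        if (s->valid && s->dirty)
            dma_write(s->global_addr, s->data, SLOT_SIZE);
        dma_read(s->data, global_addr, SLOT_SIZE);
        s->global_addr = global_addr;
        s->valid = 1;
        s->dirty = 0;
    }
    return s->data;
}

/* Mark the cached copy as modified so it is written back on eviction. */
void heap_put(uint64_t global_addr)
{
    slot_t *s = &local_cache[(global_addr / SLOT_SIZE) % LOCAL_HEAP_SLOTS];
    if (s->valid && s->global_addr == global_addr)
        s->dirty = 1;
}

int main(void)
{
    int *p = heap_get(0x200);     /* fetch the object at global offset 0x200 */
    *p = 42;                      /* modify the local copy                    */
    heap_put(0x200);              /* mark it dirty for write-back             */
    return 0;
}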
In another paper, the authors Qureshi and Patt (2006) observe that emerging technologies now provide an on-chip memory that is non-volatile in nature, known as NVRAM. This has led to a paradigm shift in computer architecture. With this new on-chip memory, non-volatile memories such as PRAM could replace conventional DRAM while achieving competitive speed at much lower power consumption (Qureshi and Patt 2006). The goal of mitigating this extra energy and memory overhead has given rise to the proposal of a new hierarchical and hybrid main-memory architecture for multi-core systems known as MN-MATE, in which an M1 memory replaces the conventional DRAM. The design and evaluation of the management techniques show that the algorithm achieves high performance with comparatively lower energy use. Within the hierarchical memory, the hybrid memory management and file caching are also handled efficiently by this algorithm. The paper further shows, using a matching-as-a-service application, that the proposed algorithm improves performance and reduces energy usage.
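The following short C sketch illustrates the basic placement decision in such a hybrid DRAM/NVRAM main memory. The structures, threshold and access counts are assumptions for illustration, not the MN-MATE implementation: frequently accessed pages are kept in the fast DRAM tier, while rarely accessed pages are demoted to the slower but lower-power non-volatile tier.

#include <stdio.h>

enum tier { TIER_DRAM, TIER_NVRAM };

struct page {
    unsigned long addr;
    unsigned access_count;   /* accesses observed in the last sampling period */
    enum tier placement;
};

#define HOT_THRESHOLD 16     /* assumed tuning parameter */

/* Re-evaluate page placement at the end of a sampling period. */
static void rebalance(struct page *pages, int n)
{
    for (int i = 0; i < n; i++) {
        enum tier want = (pages[i].access_count >= HOT_THRESHOLD)
                             ? TIER_DRAM : TIER_NVRAM;
        if (want != pages[i].placement) {
            /* A real system would migrate the page contents here. */
            pages[i].placement = want;
        }
        pages[i].access_count = 0;   /* reset counter for the next period */
    }
}

int main(void)
{
    struct page pages[3] = {
        { 0x1000, 40, TIER_NVRAM },  /* hot page currently in NVRAM */
        { 0x2000,  2, TIER_DRAM  },  /* cold page currently in DRAM */
        { 0x3000, 20, TIER_DRAM  },
    };
    rebalance(pages, 3);
    for (int i = 0; i < 3; i++)
        printf("page %lx -> %s\n", pages[i].addr,
               pages[i].placement == TIER_DRAM ? "DRAM" : "NVRAM");
    return 0;
}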
As per the review of another peer-reviewed journal article, dynamic memory management is one of the most ubiquitous and expensive operations in C and C++ applications. The paper recognises that, with multiprocessors now the mainstream architecture, it is increasingly important to investigate memory management and to exploit multi-core parallelism. The authors propose a way of exploiting multi-core parallelism in the dynamic memory management of sequential applications by spinning the deallocation work off into a separate thread. The aim of the study is to put forward an efficient design and implementation of a memory management thread (MMT) such that the underlying memory management library does not need to be modified for its work to be offloaded (Majo and Gross 2011). Using allocation-intensive benchmarks of heap memory management, the paper reports that the MMT approach achieves a consistent average speed-up.
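The idea can be illustrated with a minimal C sketch using POSIX threads. The names deferred_free and mmt_main, the queue size and the shutdown protocol are assumptions for illustration, not the implementation evaluated in the paper: the application thread enqueues pointers instead of calling free() directly, and a dedicated memory management thread drains the queue and performs the actual deallocations, overlapping them with application work.

#include <pthread.h>
#include <stdlib.h>

#define QUEUE_CAP 1024

static void *queue[QUEUE_CAP];
static int head, tail, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* Called by the application instead of free(). */
void deferred_free(void *p)
{
    pthread_mutex_lock(&lock);
    queue[tail] = p;
    tail = (tail + 1) % QUEUE_CAP;        /* sketch: assumes the queue never fills */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* The memory management thread: drains the queue and frees the blocks. */
void *mmt_main(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&nonempty, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            break;
        }
        void *p = queue[head];
        head = (head + 1) % QUEUE_CAP;
        pthread_mutex_unlock(&lock);
        free(p);                           /* actual deallocation happens here */
    }
    return NULL;
}

int main(void)
{
    pthread_t mmt;
    pthread_create(&mmt, NULL, mmt_main, NULL);

    for (int i = 0; i < 100; i++)          /* application work */
        deferred_free(malloc(64));

    pthread_mutex_lock(&lock);
    done = 1;                              /* tell the MMT to finish and exit */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    pthread_join(mmt, NULL);
    return 0;
}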
In contrast, another article presents the multi-core processor as the outcome of an evolutionary change that sets a new goal for high-performance computing (HPC). The authors point out that parallelism is a pre-existing feature of multi-core processors. The paper therefore describes the progression of the industry, discusses some of the challenges faced by multi-core processors, and surveys the solutions that have been proposed to counter those challenges.
Alongside the challenges of multi-core processors, the newer technology of tiled multi-core architectures has become extremely popular in recent years. The memory management approach faces critical challenges when programming tiled multi-core architectures so that the available resources are used efficiently. A hierarchical memory management scheme for a load-balancing stream processing middleware is used to port LPEL, a dynamic load-balancing middleware for stream processing, to the Single-chip Cloud Computer (SCC). The authors propose that the scheme fits tiled architectures well and performs considerably better than an MPI-based implementation.
In a multi-core processor, the processor-memory bandwidth can form a bottleneck because numerous processor cores share the same memory interface or bus (Blagodurov et al. 2010). The on-chip memory hierarchy of a multi-core processor has therefore become an important resource that must be managed efficiently so that this problem does not occur. A survey of the techniques proposed in recent publications assesses the effectiveness of the techniques used in recent chip multiprocessors. In addition, a number of cache optimisation techniques have been identified that are used in single-core processors but have not yet been implemented in multi-core processors; as a result, the effectiveness of these cache optimisations cannot yet be examined further within multi-core processors.
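One representative technique for managing the shared on-chip cache is utility-based cache partitioning in the spirit of Qureshi and Patt (2006). The following minimal C sketch, with made-up miss curves and a simplified greedy allocator used purely for illustration, shows the core idea: each cache way is handed to the core whose miss count drops the most from receiving one more way.

#include <stdio.h>

#define CORES 2
#define WAYS  8

int main(void)
{
    /* misses[c][k]: misses of core c when it owns k cache ways (assumed data) */
    long misses[CORES][WAYS + 1] = {
        { 1000, 600, 400, 300, 250, 230, 220, 215, 212 },   /* cache-friendly */
        { 1000, 950, 920, 900, 890, 885, 882, 880, 879 },   /* streaming-like */
    };
    int alloc[CORES] = { 0 };

    for (int w = 0; w < WAYS; w++) {           /* hand out ways one at a time */
        int best = 0;
        long best_gain = -1;
        for (int c = 0; c < CORES; c++) {
            long gain = misses[c][alloc[c]] - misses[c][alloc[c] + 1];
            if (gain > best_gain) { best_gain = gain; best = c; }
        }
        alloc[best]++;                          /* the core with most utility wins the way */
    }
    for (int c = 0; c < CORES; c++)
        printf("core %d gets %d ways\n", c, alloc[c]);
    return 0;
}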
Along with the journal articles reviewed above and the memory management algorithms they employ for multi-core processors, there is also a journal article devised around the idea of CMCP, a page replacement policy for system-level hierarchical memory management on many-core processors (Gerofi et al. 2013). State-of-the-art page replacement policies such as LRU do not meet the requirements of massive multi-core processing because they rely on costly translation lookaside buffer (TLB) invalidations to collect page-usage statistics. An experimental implementation using 64 kB pages on the Intel Xeon Phi has made it possible to evaluate the proposed algorithm on various applications (Gerofi et al. 2014).
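As a point of reference for what a page replacement policy decides, the following C sketch implements a simple clock (second-chance) scheme rather than CMCP itself, whose internal details are not described here: on a page fault it chooses which resident page to evict so that the new page can be loaded.

#include <stdio.h>

#define FRAMES 4

struct frame {
    int page;        /* page number resident in this frame, -1 = empty          */
    int referenced;  /* reference bit set on access, cleared by the clock hand  */
};

static struct frame frames[FRAMES] = { {-1,0}, {-1,0}, {-1,0}, {-1,0} };
static int hand;     /* clock hand position */

/* Access a page; returns 1 on a page fault (the page had to be loaded). */
int access_page(int page)
{
    for (int i = 0; i < FRAMES; i++)
        if (frames[i].page == page) {         /* hit: just set the reference bit */
            frames[i].referenced = 1;
            return 0;
        }
    /* Miss: advance the clock hand until a frame with a clear bit is found. */
    while (frames[hand].referenced) {
        frames[hand].referenced = 0;          /* give it a second chance */
        hand = (hand + 1) % FRAMES;
    }
    frames[hand].page = page;                 /* evict and load the new page */
    frames[hand].referenced = 1;
    hand = (hand + 1) % FRAMES;
    return 1;
}

int main(void)
{
    int trace[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int faults = 0;
    for (int i = 0; i < (int)(sizeof trace / sizeof *trace); i++)
        faults += access_page(trace[i]);
    printf("page faults: %d\n", faults);
    return 0;
}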
Conclusion
Thus, it can be concluded from the above review of journal articles on memory management algorithms in multi-core processors that, although the algorithms use different methods, they all share the single goal of managing the extensive amount of data generated by the numerous cores within the processors. The articles follow the structure of ideal peer-reviewed journal papers but fall short in describing how their main features can be applied in practice within an organisation using different development methods.
References
Blagodurov, S., Zhuravlev, S., Fedorova, A. and Kamali, A., 2010, September. A case for NUMA-aware contention management on multicore systems. In Proceedings of the 19th international conference on Parallel architectures and compilation techniques (pp. 557-558). ACM.
Gerofi, B., Shimada, A., Hori, A. and Ishikawa, Y., 2013, May. Partially separated page tables for efficient operating system assisted hierarchical memory management on heterogeneous architectures. In Cluster, Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on (pp. 360-368). IEEE.
Gerofi, B., Shimada, A., Hori, A., Masamichi, T. and Ishikawa, Y., 2014, June. CMCP: a novel page replacement policy for system level hierarchical memory management on many-cores. In Proceedings of the 23rd international symposium on High-performance parallel and distributed computing (pp. 73-84). ACM.
Imtiaz, S.Y., Hameed, A. and Min-Allah, N., 2010. Multi-core Technology: An overview. In Workshop Proceedings of the 32nd Annual Conference on Artificial Intelligence (KI 2009), Paderborn (p. 126).
Karavadara, N., Zolda, M., Nguyen, V.T.N. and Kirner, R., 2017. A Hierarchical Memory Management for a Load-Balancing Stream Processing Middleware on Tiled Architectures.
Khatoon, H. and Mirza, S.H., 2015. Cache Optimization Techniques for Multi Core Processors.
Liu, L., Cui, Z., Xing, M., Bao, Y., Chen, M. and Wu, C., 2012, September. A software memory partition approach for eliminating bank-level interference in multicore systems. In Proceedings of the 21st international conference on Parallel architectures and compilation techniques (pp. 367-376). ACM.
Majo, Z. and Gross, T.R., 2011, June. Memory management in NUMA multicore systems: trapped between cache contention and interconnect overhead. In ACM SIGPLAN Notices (Vol. 46, No. 11, pp. 11-20). ACM.
Qureshi, M.K. and Patt, Y.N., 2006, December. Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches. In Microarchitecture, 2006. MICRO-39. 39th Annual IEEE/ACM International Symposium on (pp. 423-432). IEEE.
Zlateski, A., Lee, K. and Seung, H.S., 2016, May. ZNN - A Fast and Scalable Algorithm for Training 3D Convolutional Networks on Multi-core and Many-Core Shared Memory Machines. In Parallel and Distributed Processing Symposium, 2016 IEEE International (pp. 801-811). IEEE.