Software Development, Integration, Testing, Runtime, and Training Services for Dissertation and Thesis Projects
Keywords applicable to this article: software development, software training, dissertation, thesis, core Java development, core Java training, Python development, Python training, C++ development, C++ training, software framework development, software framework training, full stack development, full stack training, machine learning development, machine learning training, artificial intelligence development, artificial intelligence training, finite elements analysis development, finite elements analysis training
By: Sourabh Kishore, Chief Consulting Officer
Dissertation and thesis projects investigating the technical side of a research field increasingly require active modeling and algorithm development, which may need validation through software development, testing, and demonstration of the technical aspects investigated, such as system architecture, algorithm design, software integration design, critical operational analysis, functional performance and behaviours, operational visualisation, visualisation of different challenges, data visualisation, data analytics, model development and analytics, and many other research activities commensurate with the research objectives and questions. The typical research fields requiring software development in dissertation and thesis projects are Industry 4.0 and Industry 5.0 designs for industrial engineering, logistics and supply chain management, and process engineering (engineering and technical studies); machine learning and artificial intelligence studies (engineering and technical studies); software architecture studies; software framework studies; industrial logistics and supply chain sustainability studies (again engineering and technical studies); data modeling studies; data visualisation studies; data science studies; data structure and database design studies; medical and healthcare system design studies; computing security and intrusion detection / prevention studies; microservices architecture, services-oriented architecture, and cloud computing studies; software defined networking studies; edge computing studies; blockchain design and operations studies; product design and development studies; eCommerce design studies; robotics studies; vehicle automation studies; digital twin studies; additive manufacturing studies; augmented reality studies; and several other areas requiring active software development, testing, and demonstration. If you are undertaking any such study requiring active software development and testing using Python, Java, C++, JavaScript, Kotlin, Go, and software frameworks (such as Spring Boot, Angular, Express, Hyperledger, Corda, and D3), our services could be of interest to you for developing the full software suite for your project (including open source databases like MySQL and PostgreSQL), testing the suite on your Ubuntu or Windows laptop, and training you thoroughly on the code, its integration, and its operation. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion. In addition, we can suggest research topics and proposals for dissertation and thesis research projects using software development as the research method. Please visit our page on topic proposal development for more details.
Following are more details about our services:
(a) Problem Description, Research Context and Topic Development, and Defining Software Requirement Specifications: Dissertation and thesis research studies with a strong technical orientation comprising low-level architecture and design details may require software development as the primary research method. Software development in a dissertation or thesis research project is different from commercial projects because its software requirement specifications should be credible enough to meet the aims and objectives of the research and should tangibly demonstrate a partial or full solution, with novelty, to the technical or business problem and gaps described. Hence, every researcher needs to design the research context and research topic very carefully in the quest to meet the research aims and objectives, and in this process to learn by practicing coding, integration, testing, and runtime operations in a professional application development environment. We shall help you in designing your project through our research topic development service such that you can propose and achieve approval of your research proposal. We shall create the runtime environment description and related software requirement specifications as a part of this delivery. The environment and software requirement specifications will be defined in such a way that they fit into your limited resources (such as one laptop with an Intel i3 processor, 4GB RAM, and a 128GB HDD running Ubuntu 20 and above) and fulfil the research aims, research objectives, and research questions with justified value addition to the problem identified, and with novelty. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(b) Software Development: Dissertation and thesis research studies with a well-defined technical or business description of problems and gaps, and a related enquiry guided by research aims, research objectives, and research questions, may require coding a prototype application from scratch; testing augmented coding of new capabilities in an existing open source framework; integrating different code modules to enable certain capabilities with targeted functionalities; writing interface code for connecting to a cloud-based application through its application programming interfaces; modifying the structure of an existing source code base to run it on cloud-based platforms (virtual machines, Docker Swarm, or Kubernetes); writing code for machine learning algorithms to integrate with the analytics capabilities of an existing open source software / application framework; writing artificial intelligence code for integrating multiple machine learning code modules to enable an automation framework; testing the runtimes of an existing coded software framework with novel environmental variables and their settings; or any other research objective required to be delivered through software development. The coding environment may be set up on a laptop running Ubuntu or Windows, and the runtime environment may be implemented on a single laptop, on multiple laptops interconnected through a Wi-Fi router, on a free cloud account on AWS or Google Cloud, or on servers allowed by universities and institutions for free hosting. We offer you services to develop entirely new code modules, or to append new code to existing code modules or coding frameworks.
Our coding choices are Python, Java, C++, C#, JavaScript, Kotlin, and Go, together with several software frameworks, such as Apache ActiveMQ, Oracle JDK 8 and all later versions, Spring Boot, NodeJS, AngularJS, ExpressJS, Hyperledger, Corda, Eclipse Ditto / Basyx Digital Twin (with AAS modeling, an AAS web client, and OPC UA interfacing), D3 for data visualisation, and Elmer FEM supported by 3D modeling in Blender 3D for finite elements analysis. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(c) Software Integration, Testing, and Runtime: These are the most challenging aspects of dissertation and thesis research projects requiring software development as the primary research method. The resources available to researchers are normally very limited, whether in the form of personal laptops / computers or free cloud computing resources. The challenge is to manage the complete primary research code runtimes and all supporting resources within those limits to generate credible validation of the outcomes. Our role is not only to develop code for new modules or to append new code to existing modules, but also to integrate the code and run it within limited resources. We have been successful in running code in the native runtimes of the programs (such as a .JAR runtime for Java, an .exe for C++ on Windows, or a compiled ./programname executable for C++ on Linux), Docker Swarm, Kubernetes, and an API gateway like Kong or Apache ActiveMQ with Postman as the API client with multiple parallel sessions, all on a single laptop of moderate configuration (such as an Intel i5 seventh-generation CPU, 8GB DDR4 RAM, and a 512 GB SATA disk drive running Ubuntu 20 or Windows 10, not Windows 11), by making several fine-tuning configurations such that the entire system can be demonstrated without any speed or hanging issues. For example, we decide on the maximum number of terminals to be opened, the sequence of running them, and the amount of data to be imported into the database after testing multiple combinations. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(d) Software Training and Knowledge Transfer: Normally, knowledge transfer about the components, their installation, and the runtime is part of our development, integration, and testing scope. However, for a modest additional fee, we offer to teach you the programming basics relevant to your project, the entire development process including the coding process, the framework modules used, the imports used, and the interpretation of every line of code used in the project. This additional knowledge may not be needed for your research defense but will be very useful when you want to position your dissertation / thesis software development, testing, and runtime project as an experiential component in your curriculum vitae when applying for a job. Your knowledge of all the fundamentals related to your project can help you perform impressively in interviews and secure employment. It is always good to learn deeply from your own project opportunity, with software development, integration, and testing as the primary research method of your dissertation / thesis. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(e) Data usage or generation for your project: In scientific research studies, the input data can be obtained from experiments conducted in laboratories, from existing databases available publicly or on request, or from simulation outcomes. In in-depth, low-level technical studies, the data structure is the primary foundation on which the software architecture and coding are based. We can generate data for you using all three methods. Whatever the data source, the database design will be carried out as per the objectives of your research. The data may be manipulated and reorganised to fit the variables defined in the study for justifying the outcomes as per the research objectives. The data may be generated during the experimentation, such as manual entries made through Postman API connections using JSON files pushed at every attempt. In experimentation, the algorithms can be tested by feeding targeted data reflecting hundreds of practical scenarios. Deliberate breaches may be programmed to visualise the automated detection and risk logs generated by a manually designed rules engine or by artificial intelligence. In some research studies, data generation during experimentation may require already existing foundation data. For example, to test smart contract rules in blockchain code or to test intrusion detection rules in an intrusion detection system, some existing foundation data will be needed before generating your own data in experimentation. Through our experience, we have compiled our own databases from already completed experiments and from Internet-based data sources, which we can use for your project. There are no plagiarism or intellectual property issues in using existing databases for testing new software designs as long as they are available publicly or with permission for academic reuse and are cited in the final report. The third approach is to generate data using a simulation tool before it is used to test a software program. There are not many simulation tools capable of generating large volumes of data for testing a software program; hence, this approach is used only when the other two data sources are either not feasible or not attractive. We have used OPNET and VENSIM for generating usable data for software testing. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(f) Project cost: The cost of our efforts will depend upon the size of the project. We assure you of very reasonable and affordable rates. Payments are generally requested in advance. However, we can negotiate delivery-linked part payments as advances by breaking the main project into several sequential deliveries. At the final payment, we shall integrate all these deliveries to complete the final product and its runtime. Payment to us covers our services only: the cost of a client-owned laptop (of the desired configuration), Internet, cloud computing account, or any paid software (if required) shall be on your account. After delivery, we shall be available for clarifications and support for as long as you want. We have supported clients free of cost even when they have come back to us after a year or two. No fee is required for testing and runtime support as many times as you need; new fees shall be requested only if you ask for additional development of code or for adding new modules, components, and capabilities.
We can also evolve the critical discussion, conclusions, and generalisations based on our analysis and present our opinion to you in the form of a write-up at an additional fee. You may, however, like to confirm for yourself whether our opinion justifies your research aims and objectives. We will take accountability for the accuracy of all analytics and the conclusions drawn, but the final success will depend upon your own understanding, interpretations, analysis, and overall knowledge gained, as used in your defense of whether your research aims and objectives have been met. Please write to us on consulting@etcoindia.co.in or consulting@etcoindia.net.in for discussion.
(g) Project value: Our services shall offer you an excellent opportunity to learn through your own project design, which always results in better knowledge than merely reading books on software coding. Your project shall comprise several modules interconnected to work as a real system in a single- or multi-laptop environment. You will go through the stages of unit coding, component coding, integrating the components through API coding, functional integration and testing, system testing, and runtime testing. You will learn the art and science of making code work for a real production project, and also the art and science of diagnostics, troubleshooting, and error management in real-world projects. Your experience, and our training on how your project was conceptualised and designed, how and why its individual components and their modules were chosen, how the units and components were coded, and how they were integrated, tested, and run, will ensure your exposure to the full software development life cycle. This experience will not only help you in defending your project but will also help you perform well in job interviews after you complete your studies. In the process of training, we offer two modes of learning: knowledge transfer related to your project outputs (which is free of cost), and training on all the code thought through and written from scratch up to the point when your project was completed successfully (at a modest additional fee). In the second option, we will make you an expert on the modules and packages used for your project. Normally, software learning is a linear process requiring you to dive into an ocean of knowledge but come back to the surface with very little, and often highly confusing and disconnected, knowledge. Learning software through textbook knowledge often results in several theoretical, disintegrated, and confusing concepts. The examples given in textbook training are mostly out of context with real-world software development. You may be able to explain the concepts but will never be able to create a product of your own. Unfortunately, almost all commercial software training programmes are linear and textbook driven. They may take several days to teach you concepts theoretically, which you can learn in merely a few hours through hands-on practice. To create a product you need specialised training on mapping software modules, components, and packages to business requirement specifications. This skill requires learning through project experience, and each project may have its own unique design considerations. We can deliver in this regard because we have worked on (and continue to work on) several highly complex production applications.
You may select and learn only the modules, components, and packages related to your project, following a requirements-based learning approach instead of a linear learning approach. You can always repeat this experience for a new project offered to you. Simply stated, you will know clearly what you need, where to find the knowledge you need in the ocean of software knowledge, and how to apply it in your project to fulfil the business requirement specifications. This is exactly the skill in demand when companies hire you for their projects. They do not seek a coding wizard who has never worked on projects; they seek individuals who have worked on a few projects and have produced promising and reliable results. This is where our service of software development, integration, testing, runtime, and training for your dissertation and thesis research projects shall be useful for you. Please contact us at consulting@etcoindia.co.in or consulting@etcoindia.net.in to discuss your software project requirements. Further, we also offer to develop the "problem description and statement", "aim, objectives, research questions", "design of methodology and methods", and "15 to 25 most relevant citations per topic" for three topics in your choice of research areas at a nominal fee. Such a synopsis shall help you in focusing, thinking critically, discussing with your reviewers, and developing your research proposal. To avail of this service, please click here for more details.
(h) Details of selected project scenarios of the completed projects (only generic details are provided because of client confidentiality):
Parameters of critical control points of manufacturing assets in an Industry 4.0 production system monitored through a machine-learning-based risk assessment system: This scenario was used for several projects with different industrial application contexts, with the critical parameters and their safe operating ranges studied from relevant literature. Several Java files emulated as MQTT clients were created to feed data about the parameters under monitoring. Apache ActiveMQ was used to consolidate the data and feed it to a machine learning runtime written in Java. The machine learning code was written to predict the future values of the parameters based on learning from past results. A Java rules engine was created that compared the predicted future values with the actual values arriving and logged risks at multiple levels, each tied to a different operating-level decision. Typically, risks can be categorised at five or seven levels.
Operating alerts of parameters related to an operations area: This scenario was used for four research projects studying a warehouse, a virtualised data centre, a fulfilment centre, and a construction storage area. Operating parameters and their pre-defined operating ranges, taken from actual operating personnel, were defined. Assuming that these parameters can be sensed using IIoT devices, multiple instances of the Postman API client application were used to feed data using JSON files configured as per the parameters. The JSON files were fed to a Spring Boot controller through a local server port (localhost:portnumber) on the embedded Tomcat server. Spring Boot with Hibernate was used to store the data in a PostgreSQL database.
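As a small illustration of the first scenario above, where Java programs emulated as MQTT clients feed parameter readings into Apache ActiveMQ, the sketch below shows what one such publisher might look like using the Eclipse Paho Java client (ActiveMQ accepts MQTT when its MQTT transport is enabled). This is a minimal sketch: the broker URL, topic name, and payload fields are illustrative assumptions, not code from any client project.
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    // Hypothetical sketch: one emulated sensor publishing a critical control parameter
    // once per second to an MQTT broker (for example, ActiveMQ with MQTT enabled).
    public class SensorEmulator {
        public static void main(String[] args) throws MqttException, InterruptedException {
            // Broker URL, topic, and payload fields are illustrative assumptions.
            MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());
            client.connect();
            for (int i = 0; i < 60; i++) {
                double temperature = 70.0 + Math.random() * 5.0;  // simulated spindle temperature
                String payload = String.format(
                        "{\"assetId\":\"CNC-01\",\"parameter\":\"spindleTemp\",\"value\":%.2f}", temperature);
                client.publish("factory/cnc01/spindleTemp", new MqttMessage(payload.getBytes()));
                Thread.sleep(1000);
            }
            client.disconnect();
        }
    }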
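And for the operating-alerts scenario just described, the sketch below shows roughly what the receiving side might look like: a Spring Boot REST controller that accepts the JSON posted from Postman and persists it through Spring Data JPA (Hibernate) into PostgreSQL. It is a minimal sketch assuming Spring Boot 3 (hence the jakarta.persistence imports); the package, entity, field, and endpoint names are illustrative and not the actual project code.
    package com.example.alerts;  // illustrative package name

    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical entity for one sensed operating parameter reading; the PostgreSQL
    // connection details would sit in application.properties.
    @Entity
    class ParameterReading {
        @Id @GeneratedValue public Long id;
        public String area;            // e.g. "warehouse"
        public String parameter;       // e.g. "ambientTemperature"
        public double parameterValue;
        public long recordedAt;        // epoch milliseconds
    }

    // Spring Data JPA repository; Hibernate generates the SQL behind the scenes.
    interface ParameterReadingRepository extends JpaRepository<ParameterReading, Long> {}

    // Endpoint that Postman posts the JSON files to, e.g. POST http://localhost:8080/api/readings
    // with a JSON body whose keys match the entity fields.
    @RestController
    @RequestMapping("/api/readings")
    class ParameterReadingController {
        private final ParameterReadingRepository repository;
        ParameterReadingController(ParameterReadingRepository repository) { this.repository = repository; }

        @PostMapping
        public ParameterReading save(@RequestBody ParameterReading reading) {
            return repository.save(reading);
        }
    }

    @SpringBootApplication
    public class AlertsApplication {
        public static void main(String[] args) {
            SpringApplication.run(AlertsApplication.class, args);
        }
    }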
A complex Java rules engine was designed to read the parameter files and recommend operating-level decisions, such as increase a value by 10%, reduce a value by 20%, or initiate a critical shutdown. In one of the projects, a machine learning code was written to predict the future values of the parameters based on learning from past results. The Java rules engine in this project was created to compare the predicted future values with the actual values arriving and to record predictive recommendations based on the operating boundaries of the parameters. Without machine learning, the system can help in real-time monitoring and control; with machine learning, it can help in predictive and prescriptive monitoring and control.
Anomaly detection in large data sets using clustering machine learning algorithms: This scenario is very popular in academic studies for dissertation and thesis research projects. We have used it in several Industry 4.0 research projects, depending upon the size and nature of the data, such as intrusion detection in the IT networks of supply chains, detection of fraud by insider traders, detection of data proliferation attackers, detection of industrial process anomalies, predictive detection of machine malfunctions, provenance data breach detection in industrial IIoT networks or smart contracts in industrial blockchains, and detection of an ongoing bullwhip effect in supply chain networks. This scenario can be executed in Python or Java. The clustering and outlier detection algorithms of interest are: K-means, Local Outlier Factor, DBSCAN, Affinity Propagation, agglomerative hierarchical clustering, Gaussian Mixture Models, and Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH). The packages used were pandas (with NumPy and SciPy), scikit-learn, and matplotlib. The projects involved both internal validity analysis (Silhouette Score and Davies-Bouldin Index) and external validity analysis (Normalized Mutual Information and Adjusted Rand Score). In addition, Apache Spark MLlib was used in one project for anomaly detection in streaming data.
Data visualisation in big data projects: This scenario was used for two research projects, studying a food and beverages supply chain and weather-related supply chain disruptions. In future, this scenario has tremendous potential as a highly credible and empirically acceptable primary research method. We used the D3 framework for these two projects, which comprises hundreds of data visualisation templates in both two and three dimensions. The templates offering the most effective storytelling from the big data set used for a project should be selected. In our two projects, we used Multi-Series Index and Line Charts. They are dynamic charts capable of continuously plotting relative changes in the values of several parameters, overlaid one above another. These charts are particularly well suited to supply chain data visualisation projects. The D3 data container can handle millions of uniformly structured records, making it suitable for big data analytics in dissertation and thesis research studies.
Smart contracts in industrial closed blockchains: Smart contracts and blockchains are difficult to realise in laboratory environments. Thanks to two popular frameworks, Hyperledger and Corda, minimal prototype environments are possible on Ubuntu 20 and above on laptops with 16GB RAM, at least a 128 GB SSD, and at least an i5 seventh-generation processor.
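As a rough illustration of what a smart contract can look like in such a prototype, the sketch below uses Hyperledger Fabric's Java contract API to record a shipment and an anomaly level on the ledger and to reject transactions whose anomaly level is too high. The contract, asset, and field names, and the threshold, are illustrative assumptions rather than code from any client project; Corda contracts follow a different state-and-flow model.
    import org.hyperledger.fabric.contract.Context;
    import org.hyperledger.fabric.contract.ContractInterface;
    import org.hyperledger.fabric.contract.annotation.Contract;
    import org.hyperledger.fabric.contract.annotation.Default;
    import org.hyperledger.fabric.contract.annotation.Transaction;
    import org.hyperledger.fabric.shim.ChaincodeException;

    // Hypothetical chaincode: peers submit shipment records together with an anomaly level
    // produced off-chain (for example, by a machine learning module); the contract rejects
    // transactions whose anomaly level is too high, otherwise it writes to the world state.
    @Contract(name = "ShipmentContract")
    @Default
    public class ShipmentContract implements ContractInterface {

        @Transaction
        public void recordShipment(Context ctx, String shipmentId, String origin,
                                   String destination, int anomalyLevel) {
            if (anomalyLevel > 3) {
                // Reject; the client application could instead hold the transaction for investigation.
                throw new ChaincodeException("Provenance anomaly level too high: " + anomalyLevel);
            }
            String record = String.format(
                    "{\"origin\":\"%s\",\"destination\":\"%s\",\"anomalyLevel\":%d}",
                    origin, destination, anomalyLevel);
            ctx.getStub().putStringState(shipmentId, record);
        }

        @Transaction
        public String readShipment(Context ctx, String shipmentId) {
            return ctx.getStub().getStringState(shipmentId);
        }
    }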
We have done quite a few projects on these two frameworks to emulate a blockchain prototype in a laptop environment. The research studies, however, require programming efforts outside the blockchain to design the application prototypes used by the blockchain peers running the chaincode clients. Blockchains do not allow automatic state changes pulled from external application views and databases, in order to keep data protection and integrity intact. We have used core Java as well as Spring Boot for communicating with Apache ActiveMQ to simulate IIoT transmissions into external application databases and generated views. Machine learning was used to predict anomalies in the implementation of contractual terms (for example, provenance anomalies) so that the external state-change log could reflect them. For state changes inside the blockchain, anomaly levels were pre-programmed in the smart contracts such that their recorded levels could be fed into the contract by the blockchain peer. If anomalies are reported by the blockchain peers, the blockchain can either reject the transaction or hold it for investigation. We programmed both scenarios and explained the implications.
Finite Elements Analysis: This scenario was executed to simulate the loading of oceanic winds and high-tide water thrusts on elevated modular coastal buildings. In this research, the design of an adaptive and resilient coastal building construction was studied using finite elements analysis. The project investigated interactions between ecological forces and the engineering resilience of a modular building model by creating a custom finite elements modeling solver. This project performed very well because it was executed in the style and class of a professional project. The building model, created in Blender 3D, was quite detailed. The software chosen for finite elements analysis was CSC's Elmer FEM, a free package offering capabilities comparable to commercially acclaimed software such as Ansys. Ansys is normally the de facto choice for studies involving finite elements analysis; however, its free student license limits the size of the 3D finite elements mesh (for example, to 32,000 elements), which restricts the project size and scope. A commercial-grade project is therefore not feasible with Ansys in dissertation and thesis research studies. Elmer FEM has no such restriction, and it provides general mathematical solvers as well as tools for creating custom solvers. Hence, if the mesh is created in good professional software (such as Blender 3D), the project class achievable with CSC's Elmer FEM can be as good as that of commercial projects.
Additive Manufacturing automation and Predictive Analytics for multiple parallel 3D printing jobs ordered through cloud-based network manufacturing: This scenario was executed to build a cloud-manufacturing-based ordering and monitoring system for multiple 3D printing tasks. The application created was a web-based interface of three pages: a resources-based registration and viewing page, an ordering page, and an order monitoring page. The workflow started with the resources-based registration page. On this page, 3D printer models could be registered against five different pre-defined manufacturing tasks stored in a database. The specifications of the manufacturing tasks were drawn from real-world 3D printing experts interviewed for this research. The backend was programmed on Java JDK 21 (the latest long-term-support release at the time of the project). The frontend pages were created in HTML5, CSS, and JavaScript. The database used was PostgreSQL.
The printers registered were also real-world models suggested by the 3D printing experts. With the resources-based view completed, the ordering page was used to generate five different orders. It was assumed that, in the real world, the printers would be configured to send automated updates of printing-in-progress statistics in JSON format. This research was a proof-of-concept design for an academic project; hence, real 3D printers were not used. Instead, a script was written to generate and transmit printing-in-progress statistics to the order monitoring page in JSON format every minute. Five instances of this script were executed in parallel in five Ubuntu terminals. The order monitoring page was designed to parse the details of the JSON files received at an API, commit them to the PostgreSQL database, retrieve them, and publish them in tabular format. The page could auto-refresh to show the values of the latest JSON file appended below the older values. After the last JSON file was received, the time difference between the first and last JSON files received was recorded as the print job completion time. Line charts were used to show the trends of all the parameters transmitted by the scripts (acting as printers). The values transmitted were Nozzle Diameter, Layer Thickness, Layer Perimeter, Chamber Temperature, External Temperature, External Movement (of robotic hands, expressed in X, Y, Z), Build Level (a numerical value 1 to n, depending upon the number of layers to be built), Filament (Laser Energy) Feed, Infill (deposition expressed in volume), Infill Percent (deposition as a percentage of the total predicted mass), Material Flow (through the nozzle, in ml/s), Print Active (Yes or No, depending upon whether the printer is at rest or in motion), and Time Taken (milliseconds). These parameters are stored as Parameter.GCode in real-world printers; they were encapsulated in JSON format for this project. Other parameters of interest related to the printer's capabilities, such as those related to laser beam operations, are not stored in the Parameter.GCode file. It may be observed that the order monitoring was not only about the printing jobs in progress but also about the operations and health of the printer while conducting the printing jobs. These values were suggested by the 3D printing experts. The application designed could display these details for all five printers simultaneously on the order monitoring page. The last module of this application was a random forest machine learning algorithm. A large array of mock data mirroring the JSON files was generated and the machine learning algorithm was trained on it. The predicted values were compared with the actual values. At about 150,000 records, the F1 score was about 0.95 and the recall score was 0.93; the precision achieved was 100%.
Predictive Analytics Functionality in Industrial Machine Maintenance: This scenario was executed to predict the maintenance schedules of machines based on the historical workload handled by specific machines. Older systems had fixed maintenance schedules for the industrial machines. However, manufacturing companies processing variable workloads based on dynamic demand patterns cannot fix their maintenance schedules: the maintenance schedules need to be as dynamic as the production schedules planned to meet the dynamic demand patterns. A literature review provided the critical variables to be monitored for assessing the performance- and workload-based wear and tear of industrial machines.
To get data on these variables, a focus group was conducted with operations managers tasked with machine maintenance at sampled manufacturing organisations. Some of the key variables are temperatures, vibrations, pressures, operational times under high duress, wobbling or any unusual visible effects, oil levels, speed of processing, efficiency, and historical mean time between failures. The focus group helped in defining ranges for each of these variables for certain machines the participants had been using in 24x7 job shops. Based on this data, training sets were created separately for four different machines described by the participants in the focus group. The random forest algorithm was chosen because of its high accuracy in multi-parameter predictions, as suggested by the literature. The platform chosen was Java JDK 21 (the latest long-term-support release at the time). The APIs were published through Apache Tomcat. The scheduled sensory data communications from machines were simulated using the Postman API client. The random forest machine learning model was built in Weka to predict the values while the machines were running. The prediction for preventive maintenance was generated by a rules engine monitoring the predicted values continuously. Preventive-maintenance advisories were generated when the predicted values of at least three variables breached the established limits for ten consecutive predictions. Based on the predictions, the operations managers would issue take-down orders and instruct the maintenance teams to conduct their routine maintenance tasks. At about 50,000 records, the F1 score for this system was quite high (0.99) and the recall score was 0.98; the precision achieved was 100%.
Predictive Analytics Functionality in Continuous Quality Monitoring of jobs completed by Industrial Machines: This scenario was executed to predict quality statistics about the jobs completed by specific machines based on historical quality data. With the help of a literature review covering the quality control process and the main statistics of interest, a data set was created for training the random tree machine learning algorithm. The values of inspected defects (Boolean), schedules (in minutes), budgets (in US Dollars), job failures (jobs failing to complete; Boolean), and material failures (Boolean) were entered in mock data generated for the project. The data sets were created with four quality levels: Poor, Fair, Good, and Excellent. In the Poor quality category, at least one material or job failure was recorded in every five jobs, with schedules, budgets, and defects varying beyond the allowed range in every job completion. In the Fair quality category, there were no material or job failures and no defects, but budget and schedule variations beyond the allowed range occurred at least once in every five jobs completed. In the Good quality category, there were no material or job failures and no defects, and either budget or schedule variations occurred at least once in every five jobs completed. In the Excellent quality category, there were no failures or variations. With the machine learning model trained on this data set, it was programmed to predict every quality parameter and the resulting quality level of the next job completion in a series of running jobs tested by it. The predictions made were of quality control parameters and the expected quality category of the next job in progress. The predicted and actual quality categories were plotted for direct comparison.
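The predictive maintenance and continuous quality monitoring scenarios above follow the same basic Weka pattern: train a classifier on historical records, evaluate it, and then classify new readings as they arrive. The sketch below illustrates that pattern with Weka's RandomForest; the ARFF file name, the attribute layout, and the example values are illustrative assumptions and not the actual project data.
    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.RandomForest;
    import weka.core.DenseInstance;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class MaintenancePredictor {
        public static void main(String[] args) throws Exception {
            // Historical records exported to ARFF: numeric sensor attributes plus a nominal
            // class attribute (e.g. maintenanceDue = yes/no) in the last column.
            Instances data = new DataSource("machine_history.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            RandomForest forest = new RandomForest();
            forest.buildClassifier(data);

            // Ten-fold cross-validation yields F1, recall, and precision figures of the kind
            // reported for these projects.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new RandomForest(), data, 10, new Random(42));
            System.out.printf("F1: %.2f  Recall: %.2f  Precision: %.2f%n",
                    eval.weightedFMeasure(), eval.weightedRecall(), eval.weightedPrecision());

            // Classify one new (hypothetical) reading; a rules engine would then count
            // consecutive breaches across variables before issuing a maintenance advisory.
            Instance reading = new DenseInstance(data.numAttributes());
            reading.setDataset(data);
            reading.setValue(0, 78.5);   // e.g. temperature
            reading.setValue(1, 0.42);   // e.g. vibration (remaining attributes left missing)
            double predicted = forest.classifyInstance(reading);
            System.out.println("Predicted class: " + data.classAttribute().value((int) predicted));
        }
    }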
Although the system was designed to be very dynamic, making predictions before every job was completed, it proved to be quite accurate. The F1 score was only 0.80 for the first 1,000 records, which improved to 0.90 at 10,000 records and then to 0.95 at 25,000 records. The recall score also improved as the number of records was increased gradually. In the end, this machine learning prototype was recommended as a useful solution for continuous quality monitoring, which is widely discussed as a feature of Industry 4.0 in modern manufacturing plants. Continuous quality monitoring can be done independently for every machine in operation. The historical data of one year can be used to predict the quality monitoring plots of the next three months; this prospect was discussed although not programmed and demonstrated. The software used was the Java Weka package, and the records were stored in a PostgreSQL database.
Recommendation Engine for Industrial Machine Job Description Generation using Machine Learning: This scenario was executed to generate the job description for a machine shop job based on historical data about the jobs successfully executed by specific machines. In this research, the capability of machine learning to predict values in tabular format after reading several identical tables was tested. The random forest algorithm was chosen for its ability to predict multiple values of a table by reading those values in identical tables fed to it. The tables in this project were templates of machine job descriptions. Such templates are defined in production scheduling software and hence have identical formats; smaller manufacturing plants may do the same in Excel sheets. In any case, tabular data can be exported to comma- or tab-separated values. The training data prepared comprised files of comma-separated values. A Java program was written to pre-process and combine the .CSV data into a training data format. The training data was compiled for four different machines, and separate training was conducted for each machine to generate predicted values for each machine separately. A rules engine was created to present the recommended values to the job designer on selection of the machine ID. This project demonstrated the ability of machine learning to use the historical performance data of a specific machine for creating the next job description. The variations in the data are taken into account such that the job designer puts a reasonable load on the machine and sets reasonable expectations of it. The knowledge of the job descriptions and the machines was collected from operations managers through an action research approach. The data thus collected was used to create the training data sets for the four machines.
Machine-learning-based workload orchestration for multi-cloud hosting of an application using user-defined parameters: This scenario was executed to demonstrate multi-cloud orchestration in an application overloading situation, in which a machine learning system recommends that the API gateway change routing based on the current loading patterns. This design did not use any industry-standard orchestration engine and hence was much more flexible. The system was designed to optimize data fetch performance for subscribers connecting to a common data source. This data source was exposed through an API gateway running GET operations, with five API targets for fetching the same data.
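The heart of this orchestration design is simply routing each request to the API target with the lowest predicted fetch time. A minimal sketch of that selection step is shown below; the class name, target URLs, and latency values are illustrative assumptions, and in the actual projects the predicted values came from the machine learning model described in this scenario.
    import java.util.Map;

    // Hypothetical sketch: given predicted fetch times for each API target (produced by the
    // machine learning model), advise the gateway to route the next GET request to the fastest one.
    public class TargetAdvisor {

        public static String pickFastestTarget(Map<String, Double> predictedFetchTimesMs) {
            return predictedFetchTimesMs.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElseThrow(() -> new IllegalStateException("No API targets available"));
        }

        public static void main(String[] args) {
            // Illustrative predicted response times (milliseconds) for five cloud-hosted targets.
            Map<String, Double> predictions = Map.of(
                    "https://cloud-a.example.com/api/data", 182.0,
                    "https://cloud-b.example.com/api/data", 145.0,
                    "https://cloud-c.example.com/api/data", 240.0,
                    "https://cloud-d.example.com/api/data", 131.0,
                    "https://cloud-e.example.com/api/data", 210.0);
            System.out.println("Route next request to: " + pickFastestTarget(predictions));
        }
    }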
The five API targets were configured as five instances of the application running on five clouds with their databases synchronised. The performance of data fetch was captured and fed into a training data set used to train a decision tree algorithm. The decision tree simply made predictions of the fetch response times of the five API targets. Using the predicted values, a Java module was written to advise the API gateway on choosing the next API target for executing the data fetch operation. This design was created to ensure that a subscriber's request would be routed to the API target expected to perform the best given the current loading of the five API servers. This solution was demonstrated as an orchestration mechanism working across multiple clouds without using their native orchestration systems. The entire programming was done in Java Spring Boot using JDK version 11; the machine learning algorithm was programmed using the Weka package. The research topics and proposals of the above scenarios were recommended by us. Please visit our page on topic proposal development for more details.
Dear Visitor, please visit the page detailing SUBJECT AREAS OF SPECIALIZATION pertaining to our services to view the broader perspective of our offerings for dissertation and thesis projects. With Sincere Regards, Sourabh Kishore.
Copyright 2020 - 2026 ETCO INDIA. All Rights Reserved