Scientific writing on computer science, based on the outlines below and additional background material provided:

1) Paper 1

There has been a rapid increase in the application of machine learning in cyber-physical systems (CPS) such as smart grid, transportation, and healthcare systems. These safety-critical applications have motivated further study of the reliability and robustness of the models used in these settings. Specifically, given an input x and a target classification t, an adversarial example x' can be found that is indistinguishable from x while being recognized as t; in an untargeted attack, x' need only be classified as any class other than the true one. The vulnerability of AI models to adversarial examples calls for closer examination of their impacts and for the study of countermeasures that mitigate the security consequences.

Adversarial attacks can be categorized by the scenario in which they occur, such as black-box, white-box, and gray-box attacks, and by target type, such as untargeted versus targeted attacks. They can also be classified by generation method, including gradient-based, score-based, transfer-based, and decision-based approaches, among others. While there is a body of work on adversarial examples, the emphasis on generation leads us also to examine other effects, such as overfitting and lack of regularization, that cause misclassification in ML/DL models and make them susceptible to adversarial examples.

Considering the range of scenarios, from an attacker with full knowledge of the model architecture and parameters (white-box) to the other extreme of an attacker with no such knowledge (black-box), crafting countermeasures to adversarial attacks can be challenging. The use of adversarial examples to perturb inputs causes the model to make incorrect predictions.
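The gradient-based generation mentioned above can be sketched with a minimal FGSM-style example. The logistic-regression model, hand-picked weights, and step size below are illustrative assumptions for demonstration, not details from the paper:

```python
import numpy as np

# Minimal sketch of a gradient-based (FGSM-style) adversarial example on a
# toy logistic-regression "model"; all weights and inputs are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Step the input in the sign of the loss gradient w.r.t. x."""
    # For logistic loss, the gradient w.r.t. the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy model: weights chosen by hand for illustration.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.4, -0.3])                    # clean input, predicted class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)      # True  (clean input classified as 1)
print(sigmoid(w @ x_adv + b) > 0.5)  # False (perturbed input misclassified)
```

The perturbation moves each feature a small step in the direction that increases the loss, which here is enough to flip the prediction while keeping x_adv close to x.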
Adversarial examples are challenging to mitigate because of three characteristics: transferability, where adversarial examples generated for one model also affect another model; adversarial instability; and the regularization effect, where a small number of adversarial samples added during training makes the model more robust to adversarial examples. Such adversarial training makes the model more resistant to model poisoning, evasion, and model extraction attacks. The paper shows how poisoning the input data with adversarial examples affects the training accuracy, with accuracy declining from 93% to 55%, close to an even chance, when 30% of the input data are poisoned.

Possible defenses include data-based approaches such as gradient hiding, data compression, adversarial training, transferability blocking, and randomization. Model-based approaches include deep contractive networks, feature squeezing, regularization, defensive distillation, and mask defense. Auxiliary-model-based approaches deploy additional tools, such as another model (for example, an encoder), to identify adversarial examples.

The paper leaves several directions open: sophisticated attacks, in which an adversary implements more complicated attacking strategies that are more resistant to defenses; applying the framework to CPS other than the smart grid; reducing the cost of adversarial training, which is computationally expensive as a defense; and implementing additional attacks to evaluate robustness and defenses.

2) Paper 2

Edge-AI-empowered predictive analytics for smart healthcare encompasses machine learning algorithms deployed on edge AI devices for predictive analytics in healthcare. The paper explores the safety and reliability aspects of deploying AI and ML in healthcare systems at the edge. The study proposes a three-layer architecture with privacy-preserving learning and evaluates a healthcare CPS deployment using a publicly available dataset.
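The privacy-preserving distributed training at the core of this proposal is typically realized through federated averaging, in which clients share model weights rather than raw data. The sketch below is a minimal, illustrative version with simulated clients and a linear model, not the paper's MMFL implementation:

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg); the clients, data, and
# least-squares model are illustrative assumptions for demonstration only.

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local gradient steps on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three simulated clients, each holding private data from y = 3*x + noise.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(1)
for _ in range(10):
    # Each client trains locally; only weights (never raw data) are shared.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # The server aggregates with a data-size-weighted average.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # converges near the true coefficient 3.0
```

Because only parameters cross the network, each client's patient-level records stay on the device, which is the privacy property the layered architecture relies on.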
The proposed multimodal federated learning (MMFL) uses distributed training on diverse healthcare data sources while allowing client management, data partitioning, enhanced scalability, and different data modalities. The architecture comprises the Remote Service Layer, for the medical device companies and hospitals that carry out model training and deployment; the Edge Collaborator Layer, consisting of edge servers and communication media such as Wi-Fi and 5G that support end users and devices; and the Edge End Layer, comprising wearables, home devices, and devices deployed for remote monitoring and treatment. Support for decentralized training through federated learning preserves the privacy and integrity of patient health data in this layered architecture. The proposed edge AI architecture is evaluated using a publicly available dataset containing 100,000 records of drug reviews, ratings, and conditions.

Smart healthcare integrates management, providing a platform for caregivers, patients, and others to connect, diagnose, treat, monitor, and process data. Machine learning in healthcare involves models that take inputs and produce outputs based on patterns learned during training. Models such as convolutional, recurrent, graph, and attention-based neural networks are well suited to handle both the time-series data and the medical image data commonly processed in healthcare.

Requirements:

Typeface and font size should be uniform throughout the document. The freelancer should use a serif typeface and 12-point font. Each paragraph must contain at least two sentences, and there must also be at least two sentences of text between heading levels. Use 1½ inches on the left margin and 1 inch on the remaining margins. For line spacing, double-space the headings and the main body of the text; double-space, not quadruple-space, between main headings and subheadings and between headings or subheadings and text. The freelancer should include an abstract of up to 350 words.
The document must also include the following major sections: Introduction, Literature Review, Methods and Materials, Results, and Discussion. Each major section may be divided into second-, third-, fourth-, and fifth-level subheadings to emphasize specific aspects of the writing. Chapters or major sections are to start on a new page. Provide a list of references at the end.