What is a Clinical Decision Support System (CDSS)?
https://www.kandasoft.com/what-is-a-clinical-decision-support-system-cdss/ (17 Feb 2021)

Software designed to help physicians sort through the complexities of making clinical decisions has been around for a long time, since the 1960s in fact. Unfortunately, the stop-start track record of clinical decision support systems (CDSS) largely discouraged adoption. But more recently, with the increasing viability of deep learning and now the wake-up call from the ongoing Covid pandemic, the need for CDSS is more and more widely acknowledged, and demand is rapidly increasing for systems that can augment what clinicians do well and provide valuable guidance on the things they don't.

What is CDSS?

Clinical decision support system (CDSS) is an umbrella term that encompasses a broad range of functions and technologies. Functionality may extend from symptom checkers to medical imaging analysis.

Whatever the exact feature set, at the core of every CDS solution is the representation of, and reasoning about, medical knowledge. These systems help clinicians by making sense of patient health records, symptom reports, and volumes of clinical data, distilling it all into actionable suggestions for patient evaluation, care, or treatment. CDSS allows care providers to better cope with and manage existing knowledge and newly acquired data, identifying patterns that result in recommendations and, ultimately, better patient outcomes, more personalized care, fewer mistakes, and more deterministic care models.

By some estimates the volume of healthcare data around the world doubles every 72 days. Corralling that data to produce useful insights is a task well-suited to AI and CDSS. The still-evolving Covid pandemic has further highlighted the need to manage and evaluate high-velocity data that can provide clues to treatment, prevention, and diagnostics.

The right software doesn’t usurp clinician control but does help bolster a conclusion or suggest reassessment by providing a second opinion.

Typically, CDS applications fall into one of two categories: rule-based systems or machine-learning systems.

Choosing either or both of those paradigms is just one of many foundational design decisions that have become apparent from hard-learned lessons in an application space that is now finding significant traction in the real world.
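
To make the distinction concrete, a rule-based CDS encodes clinician expertise as explicit, auditable logic over structured patient data, while a machine-learning CDS infers its decision criteria from labeled examples. Below is a minimal sketch of the rule-based style in Python; the threshold, drug name, and alert text are hypothetical illustrations, not clinical guidance:

    # A toy rule in the style of a rule-based CDSS: explicit, human-readable
    # logic that fires an alert from structured patient data.
    # All values below are hypothetical examples, not clinical guidance.
    def check_renal_dosing(patient):
        alerts = []
        # Hypothetical rule: flag a hypothetical drug when kidney function is low.
        if "exampledrug" in patient["medications"] and patient["egfr"] < 30:
            alerts.append("exampledrug: reduce dose or avoid when eGFR < 30")
        return alerts

    print(check_renal_dosing({"medications": ["exampledrug"], "egfr": 24}))

A machine-learning CDS replaces hand-written rules like this with a model trained on labeled outcomes, trading transparency for the ability to capture patterns no one thought to encode.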

Early Systems

Among the first CDS platforms was the Leeds abdominal pain system, developed in the late 1960s at the University of Leeds in England. Its algorithm used Bayesian probability theory to infer possible diagnoses from reported symptoms.
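
In spirit, that approach ranks candidate diagnoses by combining each condition's prior prevalence with the probability of the observed symptoms under that condition. A minimal naive-Bayes sketch in Python; every prior and likelihood below is invented purely for illustration:

    # Toy Bayesian diagnosis ranking in the spirit of the Leeds system.
    # All priors and likelihoods are invented for illustration only.
    priors = {"appendicitis": 0.15, "cholecystitis": 0.10, "non-specific": 0.75}
    likelihoods = {  # P(symptom present | condition)
        "appendicitis":  {"rlq_pain": 0.80, "nausea": 0.70},
        "cholecystitis": {"rlq_pain": 0.10, "nausea": 0.60},
        "non-specific":  {"rlq_pain": 0.15, "nausea": 0.30},
    }

    def posteriors(symptoms):
        # Score each condition: prior * product of symptom likelihoods,
        # then normalize so the scores sum to 1.
        scores = {}
        for condition, prior in priors.items():
            p = prior
            for s in symptoms:
                p *= likelihoods[condition][s]
            scores[condition] = p
        total = sum(scores.values())
        return {c: p / total for c, p in scores.items()}

    print(posteriors(["rlq_pain", "nausea"]))  # appendicitis ranks highest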

More recently, researchers at Stanford University in California developed a system that produced successful therapy recommendations 69% of the time, about the same rate as clinicians. But not all that recently: that was in 1972, in an application called MYCIN, a rules-based expert system built in LISP, in an era when LISP-based expert systems were the de facto standard.

Along with the success of MYCIN, expectations for AI rose in the public imagination, but that only made the disappointment of the next few years all the more striking. Translating human expertise into computable rules was largely a manual process, and follow-on expert systems, across multiple verticals, were unreliable at best. By 1975, with hardware not yet powerful enough to make neural networks practical, enthusiasm for AI dried up, and funding along with it.

Still, many lessons were learned from early CDS efforts, some of which still apply today.

Market Survey 2021

Fast-forward to today, and the hardware is fast catching up to the ambitions of AI software developers. If you’re exploring a CDSS development effort, take a look at some of the solutions in use today.

First Databank gives physicians informative messages through alerts within existing applications. It’s currently in use at thousands of ambulatory care facilities worldwide, delivering up-to-date drug information to physicians for active clinical decision support.

Medispan embeds drug reference knowledge into existing healthcare systems to support safe medication decisions to reduce the potential for drug prescription errors.

Allscripts offers clinical decision support tools for various physician care units including acute, ambulatory, emergency and surgical care. Allscripts focuses on providing physicians with cost-effective, interoperable clinical decision support through nearly 800 clinician-reviewed Care Guides.

Cerner clinical decision support software uses a nationally vetted set of evidence-based standards and criteria to give clinicians reliable guidance to ensure patients receive the proper treatment for their specific needs. Cerner offers clinical decision support for a range of healthcare services from advanced imaging and radiology to mobility and clinical workflow tools to allow for accurate ordering and prescription.

Elsevier offers a suite of clinical decision support tools to aid clinicians at the point of care. Its evidence-based medicine and prescription information provides clinicians with answers to clinical questions, as well as drug decision support, predictive data analysis, and online training.

Truven Health Analytics offers hospitals evidence-based clinical decision support resources designed for integration into existing hospital EHR systems through APIs.

Zynx Health evidence-based tools offer clinicians information and workflow suggestions, and facilitate collaboration between stakeholders and clinicians to improve clinical and financial outcomes.

What Developers Should Know

The push for Clinical Decision Support Systems has been reinvigorated by wider availability of clinical data, wider adoption of digital records standards, and renewed enthusiasm for, and recognition of, these systems.

Adoption of EHRs is not the hurdle it used to be. International adoption is commonplace, most extensively in Norway, Finland, Singapore, the United States, and Iceland, but to a significant extent in most of Europe and much of Asia.

It comes down to patient outcomes, that's true, but CDSS also brings collateral benefits like reducing misdiagnosis, cutting adverse medication events, and potentially avoiding unnecessary and costly procedures.

The physician runs the show and makes the decisions, while the software supplies insights and additional data points to augment the human expertise. Far from replacing clinicians, the AI takes workload off them so they can concentrate on what they do best, and more importantly, on the patient.

Challenges of AI adoption in Healthcare
https://www.kandasoft.com/challenges-of-ai-adoption-in-healthcare/ (19 Jan 2021)

With so many high-profile successes touting the promise of AI in healthcare, developers can too easily underestimate the challenges that must be overcome, and the diversity of expertise required, to bring products to market.

By diving headlong into the market, a developer risks becoming a cautionary tale unless they start with a comprehensive grasp of the potential barriers, challenges, and pitfalls that stand in the way of deployment.

Although healthcare organizations have been notoriously slow to adopt AI, venture capital investment in healthcare AI has been steadily increasing. In January 2020, total investment in the top 50 developers of AI solutions in healthcare hit the $8.5 billion mark, according to McKinsey and Company.

That investment interest is driven by some spectacular successes of AI in multiple healthcare segments. From detecting anomalies in clinical images to enhancing diagnostic decision-making and augmenting robotic surgery, AI has frequently performed as well as or better than clinicians.

Using deep neural networks (DNNs) for image classification, AI has already proven its ability to quickly and accurately detect fractures from x-rays, tissue anomalies like lesions and tumors from CT and MRI scans, and from lung imaging, the signatures of infectious disease like TB and COVID-19.

Accuracies over 95% are not uncommon in controlled settings, and while AI is often more accurate than a clinician, it is always faster.

That all paints a pretty rosy picture for the value of AI in healthcare. And it's all true, at least at the 50,000-foot level. But there are serious caveats, and as we dive into the details, we'll discover that the real picture is far more complex. We'll see that many of the most hyped successes come from controlled environments, and the same solutions don't always perform so well under the stress of a clinical workflow.

From the start, developers who plan to build AI solutions for healthcare will need a deep understanding of the state of the art, the intricacies of the market, the numerous regulatory hurdles to implementation, and the flat truth about real barriers to adoption.

Characterizing the AI Challenges in Healthcare

With the potential to drive more efficient resource usage, to improve patient comfort and safety, and to reduce the need for more invasive procedures, AI is becoming increasingly attractive on both the clinical and business sides of healthcare.

But if you’re inspired by the market potential, there’s an unavoidable reality-check in order that may have exactly the opposite effect.

The data required to train underlying neural networks, the frailties of the training process itself, and issues surrounding certification all multiply the complexity of developing AI software for healthcare.

If that’s not daunting enough, successful deployment requires navigating both the myriad regulatory requirements and the often conflicting goals of numerous stakeholders.

Data for training, and lots of it

Training models to perform clinical tasks requires data consisting of health records or images that have already been examined and labeled by clinicians. To be effective, training can require tens or hundreds of thousands of data instances. What's more, the data must be digitized in a uniform format, captured within exacting guidelines, and sometimes 'normalized' or otherwise preprocessed to smooth values prior to training.
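
What that preprocessing looks like varies by modality, but for imaging it often amounts to casting pixel values into a common numeric range. A minimal sketch in Python, with random arrays standing in for real scans (a real pipeline would load DICOM files instead):

    import numpy as np

    def normalize_scan(img):
        # Scale pixel intensities to zero mean and unit variance so that
        # images from different scanners land in a comparable range.
        img = img.astype(np.float32)
        return (img - img.mean()) / (img.std() + 1e-8)

    # Stand-ins for real grayscale scans, for illustration only.
    images = [np.random.rand(64, 64) for _ in range(8)]
    batch = np.stack([normalize_scan(img) for img in images])
    print(batch.shape)  # (8, 64, 64), ready to feed a training loop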

In many instances, the big problem is simply availability: large patient record or diagnostic image sets are often 'owned' or controlled by some of the bigger players in healthcare. So historically, gaining access has been a major hurdle for startups.

The good news? More source data is becoming accessible to developers through initiatives designed to address this very issue.

Medical images are increasingly available thanks to organizations like NIH, OASIS, and OpenfMRI. And more effort across the industry is going into formally collecting and labeling images while adhering to standards like DICOM that support both a uniform imaging standard and metadata for annotation and labeling.

Training AI models for the Real World

Of course, acquiring the right data is only the first step. The feature recognition abilities that neural networks 'learn' by processing data are encoded in models that must be trained under exacting conditions. Even if you have access to a large, correctly annotated dataset, training the model is a process rife with potential pitfalls.

The risks are well known, but not always straightforward to manage. One risk inherent to the training process is overfitting: training a model to be so sensitive to the sample image datasets used to build and test it that it fails to generalize, and underperforms against new images it sees in production.
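
Standard countermeasures include dropout layers and halting training once performance on held-out validation data stops improving. A minimal Keras sketch; the layer sizes and the random stand-in dataset are illustrative only:

    import numpy as np
    import tensorflow as tf

    # Random stand-ins for a real labeled image dataset.
    x = np.random.rand(1000, 64, 64).astype("float32")
    y = np.random.randint(0, 2, size=1000)

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(64, 64)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),  # dropout discourages memorization
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Stop when validation loss stalls for 5 epochs; keep the best weights.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)

    model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])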

Running your solution in a clinical setting presents new challenges to the integrity of lab-trained models. A model that performs successfully in a test environment is one thing, but training it to withstand the demands of a clinical workflow requires another level of sophistication. Models will always perform better against training data than against new data in the field, but they can fail outright to generalize their tasks adequately when put to the test.

The good news? More pretrained models are becoming available in some domains from companies like NVIDIA, whose Clara solution offers developers numerous tools to accelerate development of clinical AI solutions.

Nevertheless, the process of training models is a deep topic, and in practice, building successful DNN models typically requires the expertise of a senior, experienced data scientist, and eventually fine-tuning in clinical trials.

Deployment and approval are far from routine

For all the glowing anecdotes about the performance of AI against clinicians, it’s important to understand the context, as well as the realities of premarket approval, certification, and deployment.

To date, many of the most dramatic successes have come in controlled environments. To one extent or another, many haven’t gotten beyond clinical trials or proofs of concept.

This will change, but many stakeholders continue to be rightly concerned about how well AI solutions will perform in demanding clinical environments, where applications may encounter images that confuse the models, or expose variances not seen in testing.

In the meantime, developers can take cues from the FDA to understand how it views its role in regulatory oversight of AI.

Premarket Pathways

If your experience includes developing non-AI healthcare applications, you may know that the FDA has historically treated software applications like medical devices for purposes of premarket approval. And perhaps you're familiar with the FDA's traditional premarket pathways: premarket clearance (510(k)), De Novo classification, or premarket approval.

But AI presents some non-traditional challenges, especially when adaptive learning algorithms are a component of an application in a clinical setting. After all, it's a defining feature of AI systems that they continue to learn from new data processed in deployment, and the FDA has taken steps to recognize and embrace the dynamically evolving nature of machine learning.

In April 2019, the FDA began offering some new ideas to better support the unique qualities of AI in regulatory oversight. It published a discussion paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback," which articulates the FDA's suggested new approach to premarket review for machine learning-driven software modifications.

Among other things, this new regulatory paradigm suggests the agency will adopt a full product life-cycle approach to the oversight of AI applications, and potentially require ongoing monitoring of deployed solutions by both the developer and the FDA.

Dealing with Regulations and Reality

If the data, training, and approval challenges aren’t onerous enough, consider that successful deployment also requires navigating the myriad regulatory requirements. In the real world, you’ll also have to contend with the often conflicting concerns of multiple stakeholders, and address them proactively.

Every layer of the solution must be designed with real world considerations in mind:

  • Data privacy and security statutes
  • Regulation: HIPAA, ISO, and conformance to HITRUST's CSF standards
  • Stakeholder concerns: clinicians, administrators, insurers, facilities, and IT

Some of these requirements are now addressed by AI toolkits and frameworks intended for use in healthcare, but none of them are optional.

The Bottom Line

If healthcare solutions are part of your AI product roadmap, there's no getting around the need to maintain a constantly updated, realistic picture of the things that matter most: the state of the market, the emerging and evolving regulatory challenges, and the expanding array of tools and data available to developers, both you and your competitors.

Overcoming these challenges requires a well thought out, multi-pronged approach, along with the right strategic partners in both sales and healthcare software development.

Even though adoption of AI across the healthcare landscape is gradual and uneven by its very nature, there's no doubt it's increasing, and resistance in the marketplace is less and less about fears of AI replacing human expertise. In fact, clinicians are beginning to appreciate the ability of their AI counterparts to process images thousands of times faster, with comparable or better accuracy, and to handle some of the tedious and time-consuming tasks overwhelming current resources.

Ironically, by freeing clinicians to handle higher level tasks, those same capabilities may be what brings patients and physicians back together, and what ultimately makes healthcare human again.

AI in Clinical Image Analysis: Emerging Opportunities
https://www.kandasoft.com/ai-in-clinical-image-analysis-emerging-opportunities/ (16 Dec 2020)

In Healthcare and Life Sciences, Artificial Intelligence (AI) seems perpetually poised for a dramatic breakthrough, forever on the verge of toppling barriers of acceptance and adoption.

Recently, a number of factors have been increasing the pressure on health systems to accelerate adoption of AI-powered image processing to assist clinicians with detection and diagnosis of disease.

AI isn’t just one technology, it encompasses a wide array of algorithms vying for market share in healthcare. As 2021 approaches, a number of opportunities are emerging for application developers and technology buyers alike. Due in part to the convergence of the Covid-19 outbreak, a growing shortage of diagnostic clinicians, and maturing machine-learning toolsets, AI-powered image processing is quietly becoming a frontrunner in the race to wider adoption.

AI has demonstrated a remarkable ability to detect anomalies in medical imagery that even human clinicians can’t see. These are image classification applications, powered by deep neural networks (DNNs), that require models to be trained from large image datasets. But once they’re trained in focused diagnostic use cases, they routinely outperform their human counterparts in both speed and accuracy of detection.

Those facts alone suggest AI will help drive earlier detection, reduce diagnostic errors, and ultimately liberate clinicians from the more tedious, time-consuming tasks in the radiology workflow.

Many successes to date have been in the context of clinical trials and proofs of concept. But the underlying technologies, market perceptions, and regulatory requirements are continually evolving, and for developers willing to take the plunge, the flow of venture funding to healthcare AI is only accelerating.

Sizing up the market

Healthcare and Life Sciences organizations have been notoriously slow to adopt AI solutions, even as other verticals eagerly put AI at the hub of their technology strategy.

Still, venture capital investment in healthcare AI has been far less hesitant. According to Rock Health, a health technology venture fund, just under $2 billion was invested in AI healthcare solution developers in 2019.

McKinsey and Company notes that in January 2020, total investment in the top 50 developers of AI solutions in healthcare hit the $8.5 billion mark.

And recent pre-COVID forecasts suggest that the market for AI-driven image diagnostics specifically will reach $1.5 to $2 billion by 2024, up from around $400 million in 2019. But the COVID-19 pandemic appears to be accelerating that investment trend. And it's noteworthy that AI imaging solutions have already been deployed to help identify the presence of the virus in lung scans.

In parallel with the admittedly slow but improving adoption by the healthcare industry, the overall AI investment trends are promising. There's increasing interest from venture capital, and that's important because developing AI solutions for healthcare means facing a gauntlet of challenges that require adequate funding from the start.

Understanding payoffs for Healthcare and Life Sciences

Despite the hurdles to implementation, the speed, accuracy, and sensitivity of AI image analysis can significantly streamline the radiology workflow, reducing the total time consumed at every step in the process while promoting patient comfort and safety along the way.

The benefits typically touted by clinical AI advocates are something like:

  • Improving patient outcomes
  • Improving efficiency and lowering costs
  • Reducing diagnostic errors

That’s about as vague as it gets, so it’s worth a closer examination of the mechanics that can actually bring the marketing-speak to fruition.

For instance, an important hot button for healthcare organizations is misdiagnosis. Up to 10% of patient deaths are the result of diagnostic errors, and somewhere between 3% and 6% of radiologists' image readings contain clinically significant errors.

The accuracy of AI has the potential to greatly reduce costly human errors in both detection and diagnosis. That can result from a workflow that uses AI to pre-screen images and flag potentially urgent issues, or from post-screening images to catch anomalies missed by clinicians.

Lesser Known Benefits

Some specific mechanisms innate to DNNs also contribute to the realization of better patient outcomes, lower costs and improved efficiencies.

Among other artifacts of DNN algorithms is their extreme sensitivity to very subtle differences in values across datasets. Translation? When the data in question is a medical image, DNNs have an uncanny ability to differentiate anomalies from anatomy in even very low-contrast images.

That trait produces some key follow-on benefits, perhaps none as dramatic as AI’s speed and accuracy, but no less compelling:

  • Earlier Detection
    AI can see things even the most experienced clinicians can't. A recent NYU study found that a clinician assisted by AI performs better than either one alone when it comes to finding anomalies early in mammograms. That includes reducing false negatives that can cause critical delays in treatment.
  • Reduced Scan Duration
    Shaving time from the radiology workflow doesn't just happen after the image is captured. The ability of AI to differentiate anomalies from anatomy in low-contrast images can enable a significant reduction in the duration of scan sessions, and therefore in the associated radiation dose.
  • Reduced Radiotracer Dosage
    Some imaging technologies require patients to ingest contrast agents to help clinicians spot differences between tissues; in PET scans, radiotracers are injected into patients to provide contrast between different types of tissue. But AI can enhance contrast digitally, enabling a significant reduction in both the dose of contrast agents and the radiation from the scan itself.
  • Reduced Need for Invasive Procedures
    False positives can trigger a needless order for tissue samples or more expensive and invasive procedures. The pathologist's mantra, 'tissue is the issue,' has been the trusted rule when it comes to confirming a diagnosis. Until now, no other technology has had the same potential as AI to disrupt that. Some imaging systems have been characterized as providing 'virtual biopsies': confident diagnosis without the additional costs of invasive procedures.
  • Freeing Up Resources
    Obviously, faster image processing frees up clinicians for other, higher-level tasks. Just as importantly, the ability to get sufficient contrast from shorter scans reduces the demands on imaging devices, giving administrators and management finer control of scheduling and reducing wait times for patients.
  • Clinician Wellness
    Multiple studies suggest that radiologists feel their workload is increasing. Reducing demands on radiologists and pathologists, especially the most tedious and time-consuming tasks, can reduce stress and increase job satisfaction and retention.

Building Value into Your Solution

Consider these benefits in the context of your solution design and feature set. A clear understanding of how the customer might assess value will help focus your solution on what matters. AI that can literally replace or reduce costly processes in healthcare, or directly bolster the bottom line, will of course command a higher price and higher priority in the marketplace.

To better understand how AI fits into real world settings, it’s worth a closer look at some specific uses of AI-powered imaging at the point of care.

Use Cases

By now it should be clear that deploying AI doesn’t replace experienced radiologists. Its role is to reduce their workload, provide support in the reading room, and help catch issues missed, or misidentified, by the clinician.

That said, more than one expert has cautioned that radiologists who use AI will replace radiologists who don't. In an age where a radiologist shortage is adversely affecting care and increasing critical delays in detection and diagnosis, it's increasingly difficult for healthcare decision makers to ignore the potential to support radiologists with a competent and tireless technology like machine learning.

Sample applications

As impressive as the success stories of clinical AI can be, their performance isn’t uniform across all diagnostic contexts. AI is just better suited to some use cases than others. At least for now. But in the right scenarios, it can be game-changing.

AI can detect everything from fractures to tissue anomalies like lesions and tumors. Let’s look at some examples of the most promising use cases.

The Path to Implementation

Although AI is still in its infancy in healthcare, the competition is stiff, and often well-funded, and the barriers to entry remain high for all players.

Understanding the market opportunity, the potential use cases, and how the industry measures benefits is a good start to making informed product decisions. Another foundational insight is recognizing the many diverse disciplines required to develop products in this arena. That’s why the key to success lies, in part at least, in finding the right channel and development partners and leveraging existing tools to accelerate development.

The value of Channel Partners

Connecting with channel partners who have strong, established relationships in healthcare can jumpstart the new relationships you'll need down the road. The channel both represents your product after development and provides insights into its customer base upfront. Channel partners can help take the guesswork out of design by identifying real-world needs and stakeholder concerns that inform your product decisions from the beginning.

Some of the big players in AI also offer partner programs specifically for healthcare developers — Google, Microsoft Azure, and AWS can all help promote solutions built on their platform.

They may also be able to assist with the logistics of clinical trials and planning the path to regulatory approval.

If you build the right product, your channel partner has every incentive to assist with all aspects of implementation. About the only thing channel partners won’t do is develop your application.

Partnering with Experienced Healthcare Developers

Launching development with the right partner from the start is critical. That doesn't just mean a development team with experience in AI, but one with expertise and a track record in AI, in healthcare, and in the agile and rigorous DevOps practices that ensure everything remains on track.

Tools that Jumpstart Development

By now the landscape of tools and data needed to build fundamental AI capabilities is both broad and deep. Libraries like TensorFlow, and libraries built on top of it, like Keras and TFLearn, can greatly accelerate development by handling the more tedious low-level details of building neural networks.
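
To give a sense of how much these libraries abstract away, here is a minimal Keras sketch of a small image classifier; the input size, layer widths, and two-class output are placeholders, not a clinically validated architecture:

    import tensorflow as tf

    # A toy convolutional classifier for 64x64 single-channel scans with
    # two output classes (e.g., anomaly vs. no anomaly). Illustrative only.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # an entire network, defined in about a dozen lines

The low-level work of wiring tensors, gradients, and GPU kernels is handled entirely by the library.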

Tools like NVIDIA DIGITS can accelerate training of deep neural networks (DNNs), particularly those used in image classification. In 2019, NVIDIA also introduced Clara, a set of development tools for AI that are specific to healthcare and even address HIPAA and similar regulatory concerns.

The Bottom Line

Clinical image analysis is hardly the only development opportunity in healthcare AI. But for all the reasons discussed, it’s reaching an interesting convergence point on both the buyer and developer sides of the equation.

The allure of the marketplace and the increasing availability of development tools don't mean there aren't plenty of implementation and regulatory challenges, among them managing the intricacies and conflicting goals of multiple stakeholders and building relationships that encourage buy-in.

And despite all the toolkits, libraries, and healthcare-specific frameworks emerging into the marketplace, there’s no substitute for building a strong multi-disciplinary team with all the necessary expertise on board.

Any product strategy that values speed to market requires finding a development partner with deep experience in both AI and healthcare — one that comes pre-assembled, and ready to hit the ground running.

Faster Container Deployment with CI/CD
https://www.kandasoft.com/faster-container-deployment-with-ci-cd/ (2 Dec 2020)

Containerization is quickly becoming a preferred way to develop applications. It gives developers the ability to break a monolithic codebase into functional parts, so they can more rapidly deploy new code, test it, and keep it segregated from other application parts. The idea is that a container holds the code that runs one specific application function, so if it fails or needs updates, only a portion of the application is affected. Using containers to deploy code has numerous benefits on its own, but adding continuous integration and continuous delivery to the development lifecycle adds even more, especially in a fast-paced environment where features must be deployed rapidly.

Using Continuous Integration in Containerization

Continuous integration (CI) is the simplest way to begin automating the development lifecycle. In a traditional environment, developers change code, commit it to a repository, and test it later. With CI, the developer's code is automatically built and tested (your organization can still perform manual testing in addition to automated testing). By adding this simple automation step, developers know whether their change is responsible for a broken build and can fix it quickly before it reaches staging or production.
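
The exact pipeline varies by tool, but the core CI step is always the same: build the change, run the tests, and fail loudly. A minimal sketch of that gate as a Python script; the image tag and test command are placeholders for whatever your project uses:

    import subprocess
    import sys

    def ci_gate():
        # Build a container image for the commit under test, then run the
        # automated test suite inside it. Any nonzero exit fails the pipeline.
        steps = [
            ["docker", "build", "-t", "myapp:candidate", "."],
            ["docker", "run", "--rm", "myapp:candidate", "pytest", "-q"],
        ]
        for cmd in steps:
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"CI failed at: {' '.join(cmd)}", file=sys.stderr)
                return result.returncode
        print("CI passed: image is ready for staging")
        return 0

    if __name__ == "__main__":
        sys.exit(ci_gate())

In practice, a CI service such as Jenkins or GitLab CI runs an equivalent sequence automatically on every commit.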

CI works with monolithic codebases too, but it has several additional benefits when it’s used in combination with container code. Some benefits of CI include:

  • Reducing the time to release code: Because the process is automated, code is tested and bugs are fixed more quickly before deployment to production. Better testing reduces the delays caused by clunky code deployment steps. Also, because code is tested immediately, developers no longer need to hunt down who introduced a bug before it can be fixed.
  • Reduced outages: Bugs introduced to the main production codebase can cause errors, ruin user experiences, corrupt data, or even cause major downtime. With automated testing and CI, bugs and downtime are reduced. And if you must ship a critical patch, CI shortens the trip from development to production.

Using Continuous Delivery

Continuous delivery (CD) is also automation, but it's injected at a different step in the development lifecycle. CD takes over much of the manual deployment work for you, and you can take it a step further by automating deployment from staging to production. In a delivery model, a developer still must click a button to trigger the automation that releases code. In a continuous deployment model, automation takes over completely, deploying code through testing and staging and on to production.

CD gives you several benefits:

  • Faster delivery of code: Instead of waiting weeks for testing and a scheduled deployment, CD takes code and uploads it to production after it’s been tested, reducing the time it takes to go from development to production.
  • Fewer human errors: With manual promotion to production, errors are common. For instance, if you forget to add an environment variable to the server, the code will fail. These failures can crash your software and interrupt service.

Adding Kubernetes

When you’re working with automation and containers, you’ll run into Kubernetes. Kubernetes is the standard in development deployment for containers. Kubernetes is referred to as an “orchestration” tool, which is basically another name for CI/CD automation. It will orchestrate the continuous delivery of containers including configurations, images, and the applications that load within nodes.

It’s important to consider containers as their own environment that runs an application. The container runs the physical server’s operating system unlike traditional virtual machines where any operating system can run on a hypervisor. It’s not uncommon to deploy a container on a virtual machine running a specific operating system. The advantage is that containers isolate your program from other applications and give developers analytics and feedback on that particular component of an application.

Kubernetes fits in with CI/CD as a primary tool for container automation. With Kubernetes, developers can:

  • Scale container resources up or down depending on resources required to run applications
  • Automate deployments including rollbacks after failures
  • Automate configuration of node clusters

Note the third benefit. Containers work in clusters where you can provision a number of nodes specific to the amount of resources necessary to run the application. The major cloud providers such as Google and Amazon have services that will optimize container usage so that resources aren’t wasted. It’s possible to spend more money using containerized services than using virtual machines if your resources aren’t properly configured and optimized.
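
As a concrete example of that control, the official kubernetes Python client can adjust a deployment's replica count in a few lines. A minimal sketch; the deployment name and namespace are placeholders, and credentials are assumed to come from your local kubeconfig:

    from kubernetes import client, config

    def scale_deployment(name, namespace, replicas):
        # Patch the replica count of an existing Deployment so the cluster
        # provisions only the pods the application actually needs.
        config.load_kube_config()  # reads credentials from your kubeconfig
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    # Hypothetical deployment: scale 'web-frontend' to 5 replicas.
    scale_deployment("web-frontend", "default", 5)

The same call can scale back down during quiet hours, one way to keep container spending below what equivalent virtual machines would cost.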

It’s important to note that Kubernetes and Docker are not the same terms. When developers first get into containerization, it’s common for them to get the two terms confused. Docker is the company that made containers popular. It’s a technology for deploying containers and building files (Dockerfile) that determine the way the container image is created. Kubernetes is the CI/CD orchestration tool to automate deployment of containers.

Conclusion – CI/CD is Key to Container Deployment Optimization

Just as Jenkins automates continuous integration and delivery for standard software, Kubernetes is the automation tool that speeds up deployment of containerized software projects and lets developers automate much of the manual work of promotion to production. Containers bring several benefits to the development lifecycle, but CI/CD tooling paired with an orchestrator like Kubernetes is what speeds up development.

Whether you use containers in your development lifecycle or stick with virtual machines and traditional architecture, CI/CD speeds up deployments and reduces the number of bugs introduced to production. It’s core to your automation strategy when containers run critical applications that must maintain 100% uptime.

If you need help with Application Containerization or DevOps Services, check out Kanda’s related services.

Going From Traditional IT to Modern CI/CD
https://www.kandasoft.com/going-from-traditional-it-to-modern-ci-cd/ (20 Nov 2020)

For an enterprise with siloed operations and development, it's not uncommon for there to be friction between the two departments. Developers create software, but they need collaboration from operations to open ports, provision resources, or provide permissions on various servers and environments. When operations and development clash, disagreements delay productivity and the deployment of software. To alleviate much of the contention and speed up software delivery, DevOps with modern CI/CD creates a collaborative environment where the two teams merge and work together to provide a better end product.

Making the Switch to DevOps

Changing culture among teams is a delicate process. You can create a DevOps team from new hires or from people already part of the business. Whichever direction you take, the team you create will be different culturally from the original operations and development teams. The change is for the better, and most teams work much better in a DevOps environment than in silos.

Agile is a common factor among DevOps teams in the enterprise. It might be standard in development, but operations staff do not typically work in an Agile environment. When you create a DevOps team, Agile will be part of the process, helping facilitate better communication and establishing an operating standard that accommodates change and deploys software in a faster time frame.

CI/CD is at the Heart of DevOps

Traditional IT uses change control and testing resources, but the heart of DevOps is the CI/CD pipeline. This pipeline alleviates much of the manual effort required of software developers. In traditional IT, developers work with operations to configure resources, and then at least one developer is designated as the deployment manager. Any deployments may or may not require approval signatures, but the bottleneck is the manual process necessary to deploy software.

In addition to manual deployment, developers must test their code in testing and staging environments. For each step from testing to staging to production, someone must deploy code to each environment. This either consumes a developer's time or forces the organization to hire a full-time employee to handle deployments.

Freeing developer time and speeding up deployments is where CI/CD excels, and it's the main component of DevOps that attracts development teams. The first step in the software development pipeline is moving code from development to a testing environment. This step is the easiest to automate and is usually where DevOps begins its changes.

After developers finish making changes to code, it's checked into a repository. If developers check in buggy code, the bug usually isn't found until testing, which could be days or weeks later. With continuous integration (CI) tools, developers can automate the process of building the codebase after changes are made.

Testing automation can be done by several tools. Popular tools include Selenium (for browsers), GitLab CI, Jenkins and Travis CI. These tools will build the codebase and automatically deploy it to a testing or staging environment if necessary. If bugs are found, the developers receive an alert so that they know changes must be made. The advantage of using CI automation is that bugs cannot be introduced into the codebase and left to persist until it’s too late. Developers are notified immediately so that they can fix the bug before it’s sent to production.

Most development environments have a staging environment as well, and it’s the same process when you move to DevOps. CI tools can be used to automate delivery to a testing environment and a staging environment where a quality assurance (QA) team can test it. Testing in the staging environment is a combination of manual and automated testing. In DevOps, you can often skip a testing environment and move code directly to staging.

After code is tested, it's time for promotion to the production environment. There are two types of continuous promotion to production: continuous delivery and continuous deployment. Continuous delivery tools provide users with an easy way to promote to production. All activities (e.g., backups, code promotion, SQL scripts) are automated using delivery tools, but a user must still trigger the action; for instance, most tools have a button users click to start the promotion. With continuous deployment, the entire process is automated: code is automatically taken from staging after being tested and deployed to production.

Some of the tools that perform continuous integration also handle continuous delivery and deployment: Jenkins, Travis, and GitLab can automatically deploy to production, while Octopus is a tool dedicated to delivery and deployments. Continuous deployment is where most organizations want to end up, but continuous delivery is a great first step: promotion is automated yet still triggered manually before code is sent to production. Once you've worked the bugs out of the process, you can fully automate deployment.

Monitoring is a Key to Success

Let’s say that you’ve already set up your automation procedures and CI/CD tools are configured to work with your codebase. Even though the process works smoothly now doesn’t mean you will never have issues in the future. Operations could make changes to infrastructure configurations. Servers could move to new locations on the network, or a temporary network outage could interfere with deployments. Numerous issues can cause your automation to throw an error or simply stop working. The way to avoid a sudden unforeseen stop in the automation process is to ensure good monitoring tools are implemented.

Your DevOps team should choose monitoring tools that work directly with your CI/CD tools. Monitoring tools can be much more expensive than open-source CI/CD tools, but they are necessary to avoid a serious outage. New Relic, Dynatrace, and AppD are a few monitoring tools to consider.

You’ll notice in many of these tools that analytics span more than just code deployment automation. DevOps can monitor the production application performance and any bugs detected during execution. This gives the development team the ability to remediate bugs before they affect user experiences or leave the application vulnerable to exploits. Monitoring applications help developers be proactive rather than rushing to fix issues causing downtime on critical applications.

The downside to extensive monitoring is the volume of log files generated. You'll need extra storage space to handle the increased log files and the space needed to perform analytics. Logs also accumulate fast if you work with containers and microservices: each service has its own logs and analytics for performance review, so you could potentially have millions of log entries to review and use in your analytics.

Another common issue with large applications integrated with analytics tools is the hit on performance. Always test these analytics applications thoroughly to ensure that performance isn’t harmed. With millions of log entries and analytics tools executing in the background, your critical applications could suffer from performance degradation. Analytics software should be tested in staging, which is a mirror image of the production environment.

A Rundown of the CI/CD Pipeline

When you design a DevOps team, you have several tools to choose from, but the overall modern pipeline has the same steps. The following steps are what you should expect to design and implement into your development lifecycle with a DevOps team; a minimal sketch of them as an automation script follows the list:

  • Automatically build code from a version control system where developers commit changes.
  • Provision new resources or change configurations used to execute code.
  • Move code to a testing environment. This could be a development or staging environment.
  • Add or change environment variables.
  • Push code to servers.
  • Execute finalization steps such as reboots or restarts of services.
  • Test to ensure services are running as intended and roll back if necessary.
  • Review logs if necessary.
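
Tooling differs, but the shape of the pipeline can be expressed as a simple ordered script that aborts and rolls back on the first failure. A minimal Python sketch with each stage stubbed out as a placeholder; a real implementation would call your CI/CD tools at each step:

    import sys

    # Placeholder stages; each returns True on success.
    def build_code():        return True   # build from version control
    def provision():         return True   # provision or reconfigure resources
    def deploy_to_testing(): return True   # move code to testing/staging
    def set_env_vars():      return True   # add or change environment variables
    def push_to_servers():   return True   # push code to target servers
    def restart_services():  return True   # finalization: restarts or reboots
    def smoke_test():        return True   # verify services run as intended

    def rollback():
        print("rolled back to the previous release")

    def run_pipeline():
        stages = [build_code, provision, deploy_to_testing, set_env_vars,
                  push_to_servers, restart_services, smoke_test]
        for stage in stages:
            if not stage():
                print(f"stage failed: {stage.__name__}", file=sys.stderr)
                rollback()
                sys.exit(1)
        print("pipeline complete")

    if __name__ == "__main__":
        run_pipeline()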

Optimizing Cloud Applications for Healthcare Providers
https://www.kandasoft.com/optimizing-cloud-applications-for-healthcare/ (20 Oct 2020)

The cloud offers any business the opportunity to access advanced technology and leverage it to improve the reliability and performance of applications, but migrating from on-premise infrastructure to the cloud is a huge undertaking and requires the right knowledge. For healthcare administrators, it can be a challenge to understand what should be sent to the cloud, what should stay on-premise, and the compliance requirements surrounding cloud services and data security.

Risks Associated with the Cloud and Healthcare Data

As you research the benefits of cloud infrastructure, you'll see that there are several risks, but these risks mainly stem from improperly configured security controls. The cloud itself is secure, but the administrators who manage cloud infrastructure often don't understand the consequences of specific configurations. Administrators also fail to take advantage of the efficient logging and monitoring most providers offer to ensure that compromises are detected and mitigated early.

According to a recent DivvyCloud report, 33 billion records were exposed in 2018 and 2019 due to cloud security misconfigurations. More concerning, the report highlights that 99% of public cloud misconfigurations go unreported. A 2020 report from Bitdefender showed that security misconfigurations are a top concern for CISOs, and endpoint misconfigurations are one of the most common categories for data breaches. Reports of misconfigurations and data breaches cause apprehension among healthcare providers, who support and store highly sensitive patient data, and HIPAA violation fines are hefty should this data be disclosed to attackers.

Benefits of the Cloud for Healthcare Providers

The security complications of misconfiguration can be mitigated with the right support, and once they are, the cloud has several benefits for healthcare providers. With the right planning and migration support, healthcare providers can have the best of both worlds: configurations that secure data and infrastructure that supports reliability, performance, scalability, and availability. This doesn't mean every application will fit in the cloud, but a majority of infrastructure can be offloaded to a cloud provider, which saves money and time for the healthcare organization.

Despite the security risks, cloud computing has transformed healthcare infrastructure. Healthcare is one industry that's been slow to move to the cloud, and that slowness has caused performance and scalability issues. Resource sharing and data analysis enhance public health and diagnosis, and the cloud has provided ways to further advancements in medicine. For example, an organization that keeps all data on-premise isn't leveraging the ability to use a diagnosis or patient symptoms to generate a potential second or third analysis based on shared cloud application features.

Scalability is also a major issue for healthcare providers. Even a small doctor's office can see its Electronic Health Records (EHR) application's data storage increase exponentially, which means storage capacity must increase with it. Storage is expensive because the organization needs capacity to store data and then additional capacity to perform backups. With the cloud, the healthcare organization can scale storage as needed, including the space needed to back up data.

Finally, the cloud offers reliability, performance, and availability to all organizations, but healthcare can leverage this benefit more than most. Healthcare and public health needs never stop, no matter the time of day. Healthcare organizations that use cloud infrastructure can ensure that applications run 24/7 and that performance is at its highest even during peak business hours.

How the Cloud Can Transform Healthcare Applications

Every digital transformation needs a plan, but to create a plan the organization needs to lay out goals and identify the cloud benefits that will be leveraged. Your goal could simply be to increase the performance of web applications; it could be to reduce spending on expensive infrastructure, or to leverage everything the public cloud has to offer. Here are just a few use cases for cloud transformation in healthcare:

  • Run applications with better performance: EHR applications run faster, pharmacies can leverage faster script and medication management, and physicians can get help with diagnosis and analysis using technology that’s normally expensive to run in a small office.
  • Billing and payments: Several billing and payment applications are available in the cloud to ease the transition of basic cash and insurance payments to a complete system that will allow the physician to take credit card payments and automate insurance claims.
  • Logging and monitoring: Cybersecurity systems are expensive, but every major cloud provider offers logging and monitoring across infrastructure, which is also a HIPAA requirement for compliance.
  • Collaboration: Share data more easily across multiple systems. For instance, patient records can be shared with billing systems in the cloud.
  • Diagnosis analysis: Artificial intelligence and machine learning are also transforming healthcare. Doctors and hospitals can make more informed decisions based on intelligent analysis and treat patients faster.
  • Lower costs: Technology is expensive, but the cloud offers access to complex technology at a fraction of the cost to house it on-premise.
  • Improved security: Although the main issue with cloud resources is security, misconfigurations can be avoided by working with the right provider who will help organizations transfer data from on-premise systems to the cloud. The cloud itself is secure, but the wrong configurations can lead to major data disclosure. Kanda Software can help any healthcare provider transform their on-premise infrastructure to the cloud and secure it from attackers.

Legacy Applications Run in the Cloud Too

Healthcare is an industry that's been around for decades, so legacy applications usually exist in the environment, especially in hospitals. It's not uncommon to think that cloud computing is only for new applications, but big cloud providers have services that run legacy applications. For instance, Google Cloud Platform offers APIs and serverless technology that run legacy applications, and Modernization Platform as a Service (ModPaaS) will run applications written in COBOL, PL/I, Assembler, JCL, and many more languages.

Migrating to the cloud to optimize legacy healthcare applications requires a strategy in the same way modern applications require a migration plan. The plan you choose determines whether you run part of the application in-house and the rest in the cloud. Data must be synchronized between the two platforms, but this can be done safely to secure data and keep both platforms available to customers.

By migrating legacy applications to the cloud, healthcare providers can retire older hardware and save money on costs. Migrating entirely to the cloud is considered rehosting, and there can often be some refactoring and reconfiguring necessary for the application to run. Kanda Software helps customers create a plan and migrate legacy healthcare applications to the cloud using a strategy to carefully test the environment first, and then perform a cutover only when stakeholders confirm that data and applications run as intended.

Conclusion

The healthcare industry is notoriously slow to migrate applications to the cloud. Compliance issues, security, and legacy applications are a few of the reasons healthcare providers fear the move. If done correctly, migration to the cloud can be beneficial for hospitals, doctors' offices, insurance companies, labs, and any other business that works with patient data. Your applications perform better, you have access to advanced security controls, and your system will scale as the business grows.

Canary Deployments with Kubernetes and Containers
https://www.kandasoft.com/canary-deployments-with-kubernetes-and-containers/ (14 Oct 2020)

Continuous deployment is a main goal for many large developer teams that need to deploy new application features more rapidly. Container technology facilitates rapid code deployment and offers advantages that traditional production environments can't. Not only can developers deploy code rapidly, they can also test code in a production environment with little downtime (e.g., fewer reboot and process-restart interruptions). Canary deployments, implemented with orchestration tools such as Kubernetes, can ensure that applications remain available even during scheduled updates.

Canary Deployments vs. Blue/Green Deployments

The term "canary deployment" is relatively new, but the methodology behind it is traditional. Traditionally, developers deploy applications to production and then log any issues to ensure quick bug fixes. Of course, promoting bugless code is the goal of every developer, but mistakes happen, and even after testing an unforeseen bug can be introduced into production.

In a traditional deployment, new code is deployed to all production machines, and the assumption (or hope) is that no bugs crash the system; but bugs are common, especially in a large codebase. These could be low-priority bugs that don't interfere with user experience, or they could be severe, revenue-impacting incidents. Any interruption of service could force a full rollback of the updated code, since all production servers contain the newer version.

With canary deployments, developers deploy to a small subset of production machines, which serves only a subset of the users who connect to the application. It's not full A/B testing, but the activity is similar. Only a small portion of your users get the new version of the software, so developers can test a variety of features and user experiences while the old version still runs in the other production environments.

Canary releases have several benefits:

  • Identify user experience patterns and get feedback from user actions
  • Test resources and performance of the application
  • Capture metrics on the new software version to identify where changes should be made either on the environment or within the application code

Canary deployments are an incremental way to deliver software to users, useful when you have several features you want to test live. Let's say your UX staff has designed a new layout for an application, and they aren't sure whether the new release will be better or worse for sales. By deploying it to only a portion of container nodes, some users get the old UX layout and others get the newly updated one. Both versions run simultaneously, and load balancers direct a small segment of your users to the new one. Log user patterns and any errors, and you can determine whether users are more likely to buy products with the new layout, or whether the new UX needs revisions to better increase revenue.

Canary deployments give you a few other benefits:

  • Testing can be done in production. That's right: in production. Only a portion of your containers have the new code, so if there are errors, not every customer encounters them, and you can roll back only those containers should a critical crash occur.
  • Traffic sent to the canary release can be defined to limit risk of user frustrations and errors.
  • A small amount of resources is used to run the test environment.
  • Since multiple containers can be used, several different new features can be tested at the same time in production.

While there are several benefits to canary deployments, you should be aware of the risks as well. These include:

  • Changing deployment procedures could lead to inefficiencies.
  • Customized automation scripts might be needed to deal with feature releases.
  • Tracking releases can be a pain if you have several versions of your software running in parallel.
  • As several features are being tested at the same time, it could put a lot of overhead on developers who need to fix issues and bugs for quick release to production.
  • Sticky sessions are essential, as a client could otherwise hit a production server with one request and your testing environment with another (a simple assignment scheme is sketched after this list).
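
One common way to keep sessions sticky while controlling the canary share is to assign users deterministically by hashing a stable identifier. A minimal Python sketch; the 10% split and the user IDs are illustrative:

    import hashlib

    CANARY_PERCENT = 10  # share of users routed to the canary release

    def route_for(user_id):
        # Hash a stable identifier so the same user always lands on the
        # same version; that keeps sessions sticky across requests.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "canary" if bucket < CANARY_PERCENT else "stable"

    # Roughly 10% of users are consistently routed to the canary.
    for uid in ["alice", "bob", "carol", "dave"]:
        print(uid, "->", route_for(uid))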

How Kubernetes Can Help

Kubernetes is an orchestration tool that supports continuous integration (CI) and delivery in a container environment. If you use continuous integration in a traditional environment, you may be familiar with CI tools such as Jenkins CI or Travis CI, which automate testing and delivery of new code in a monolithic coding environment; Kubernetes plays the analogous automation role for containers. If you decide to switch to containers running on virtual machines or even physical servers, Kubernetes can help reduce overhead and eliminate some of the disadvantages of traditional deployments.

You still need an official testing environment before sending your code to production. Although canary deployments let you test experiences and features in production, you must still test for bugs beforehand. Combined with your CI tooling, Kubernetes can promote builds through testing and staging environments before deploying to production.

With Kubernetes, you can create “phases” by building a canary phase for your production test environment in the Kubernetes Service setup. With this strategy, you build a new controller for each version of your microservice, then deploy and provision containers for each one. Kubernetes tracks multiple versions of your software, lets you activate them, and controls the resources each version uses. This eliminates several of the disadvantages of canary deployments, so developers stay more organized and waste fewer resources.
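
As a rough illustration using the official Kubernetes Python client, the common pattern is two Deployments whose pods share an “app” label that a single Service selects, so the replica ratio controls the traffic split. The names, images, and the 9:1 ratio below are assumptions for the sketch, not a prescribed setup:

from kubernetes import client, config

def make_deployment(name, track, image, replicas):
    # Pods carry both the shared "app" label (matched by the Service)
    # and a "track" label that distinguishes stable from canary.
    labels = {"app": "myapp", "track": track}
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="myapp", image=image)]
                ),
            ),
        ),
    )

config.load_kube_config()
apps = client.AppsV1Api()

# Nine stable replicas and one canary replica behind a Service that
# selects "app: myapp" send roughly 10% of traffic to the canary.
apps.create_namespaced_deployment("default", make_deployment("myapp-stable", "stable", "myapp:1.0", 9))
apps.create_namespaced_deployment("default", make_deployment("myapp-canary", "canary", "myapp:1.1", 1))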

Kubernetes also provides a rollback strategy should any critical issue crash the application. With traditional code promotions, any environment variables and configurations must be rolled back as well, which can be a stressful, tedious task for the developer in charge of fixing the failure. With Kubernetes, you can automate rollbacks so that failures don’t persist for long on any given container.
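
A minimal sketch of that automation, assuming the canary Deployment name from the previous example and driving the standard kubectl rollout commands from Python:

import subprocess

DEPLOYMENT = "deployment/myapp-canary"  # assumed name from the sketch above

# Wait for the rollout to become healthy; if it never does, revert
# to the previous revision instead of leaving a broken canary up.
status = subprocess.run(
    ["kubectl", "rollout", "status", DEPLOYMENT, "--timeout=120s"]
)
if status.returncode != 0:
    subprocess.run(["kubectl", "rollout", "undo", DEPLOYMENT], check=True)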

Conclusion

Canary deployments are a great way to incrementally promote code and test features in an A/B-testing manner. Although you’re testing in production, you still can’t afford buggy code that could crash the application. Using Kubernetes, you can configure the orchestration layer to provision resources and deploy canary versions of your code efficiently, and should you need a rollback, Kubernetes can perform that safely as well. If you have software with several features and you want to test user satisfaction and experience, canary deployments, containers, and Kubernetes automation are the strategy you’re looking for.

What is the Point of DevOps? https://www.kandasoft.com/what-is-the-point-of-devops/ Mon, 28 Sep 2020 20:20:12 +0000 https://www.kandasoft.com/?p=34804 Traditionally, operations and development teams were distinct departmental silos within the enterprise, but the need for automation and more rapid code deployment has driven the rise of a combined DevOps culture. For businesses that don’t have a DevOps team, it’s tough to determine whether a new team is necessary and how it should be created and managed. A DevOps team is essentially a group of developers with a good understanding of server and network management who work closely with operations to automate code testing and deployment across on-premise infrastructure and the cloud.

When Do You Know It’s Time to Build a DevOps Team?

Bringing the development side to operations is a cultural change for everyone involved. Most operations people are not familiar with Agile, which often becomes a huge part of the way DevOps works. The software development life cycle is unfamiliar to many operations teams as well.

In a typical operations and development environment, although the teams are separate, they still work closely together during code promotions and bug fixes. For instance, project managers for software development might work closely with operations to deploy updates to internal tools hosted on internal servers. With DevOps, this process can be automated using continuous integration and continuous delivery (CI/CD) tools so that testing and deployment need far less human oversight and interaction.

Some factors that indicate that you could benefit from DevOps:

  • Deployments require both operations and development personnel to come together, and schedules force the delay of code promotion to production.
  • Production deployments take months to complete between waiting for development, scheduling deployment days, and bug fixes that force updates to be rolled back.
  • Cloud deployments require changes in infrastructure configurations as well as updates to the production codebase, and they are usually done manually.
  • Resources — both on-premise and in the cloud — are mismanaged and misconfigured, leaving errors after deployment that must be remediated.
  • Better version control is needed across deployments.
  • Human errors cause downtime and software bugs, which require rollbacks or immediate developer attention for remediation.
  • Operations and development work independently, and their separate procedures don’t mesh, causing interruptions and delays in what should be a streamlined deployment process.

Benefits of Building a DevOps Team

Any major change in corporate culture requires staff effort and the money to set up the system. You can hire new staff or make changes in the current environment to combine the two departments. If you choose to combine operations and development, the change requires team building and education across the organization so that both teams understand the new way of working.

Agile is commonly used in development, but operations people are usually unfamiliar with Agile methodologies. They must adapt the way they handle infrastructure changes by adding Agile to the mix, which can meet resistance from personnel who already have their own way of working.

Even with the upfront investment both monetarily and with staff time, corporations benefit from the numerous advantages DevOps provides both technically and culturally. As more organizations adopt DevOps as a legitimate IT need, they continue to improve software lifecycle management and save money on failed promotions to production.

Research from Puppet found that DevOps had several tangible benefits:

  • Organizations were able to perform 200 times more deployments.
  • Recovery from issues was 24 times faster.
  • Failure rates were 3 times lower.

A survey of 31,000 professionals from Accelerate indicated that:

  • Lead time from commit to deploy was 106 times faster with DevOps.
  • Development teams were able to deploy code 208 times more frequently.
  • Recovery from critical incidents was 2604 times faster.
  • Failure rate from code promotions was 7 times lower.

The organizations in this research that leveraged DevOps most effectively shared several traits. They already had a clear change control process and maintained their own codebase. Development and IT employed continuous integration and delivery automation, and they incorporated automated testing to speed up QA. They monitored heavily, both to identify issues in the automation procedures and to catch errors in the codebase. Finally, they tested disaster recovery and incorporated cloud services into some of their automation and deployment procedures. Take these factors into consideration; they can considerably ease your own migration to a DevOps environment.

What Type of Environment Does DevOps Have?

Whether the organization builds a DevOps team from new hires or from current operations personnel and developers, the new team will have its own environment, tools, and procedures. The type of environment and the tools added to it depend on the nature of the job. Some DevOps teams simply automate operations, writing only scripts deployed to internal servers. Others work with public-facing software and servers that require versioning, change control, and other standard development procedures.

One aspect that cannot be avoided is the quality assurance (QA) and testing process. DevOps has many moving parts, but the process is cyclical. The main component a DevOps team adds to development and operations is automation, so the environment centers on automation concepts, procedures, tools, and solutions: automation that saves code, versions it, measures it, and evolves from lessons learned.

An environment for effective DevOps will have the following components:

  • Processes: This component depends on the business, its requirements, and team preferences.
  • Communication: Both developers and operations people need a way to communicate other than email. Microsoft Teams and Slack are often used in the enterprise.
  • Development tools: Every developer has their preferred development environment, so check with your team to find out which ones to install.
  • Continuous integration tools: These automation tools take checked-in code, build it, and move it to a testing environment (see the sketch after this list).
  • Continuous testing tools: Testing automation builds the code, tests it, uses benchmarks for performance analysis, and creates reports for review. You may still have human QA reviewers to identify issues with user experience and interface components.
  • Continuous deployment tools: Orchestration from testing to production speeds up code promotion especially in a large organization that has several developers contributing to the codebase.
  • Cloud tools: For organizations that work with cloud infrastructure, every provider has their own tools to help automate code promotions, configurations, and logging. Most provider tools do not work on other platforms, so the tools you use will depend on the provider that you choose.
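
To make the continuous integration component concrete, here is a minimal sketch of the gating idea in Python: each stage must succeed before the next one runs. The commands and paths are placeholders standing in for whatever build, test, and deploy steps your pipeline actually uses:

import subprocess
import sys

# Placeholder stages: any real CI tool runs an equivalent ordered list.
PIPELINE = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "tests"]),
    ("deploy", ["./deploy.sh", "staging"]),
]

for stage, command in PIPELINE:
    print(f"--- {stage} ---")
    if subprocess.run(command).returncode != 0:
        # A failed stage stops the pipeline before bad code moves on.
        sys.exit(f"{stage} failed; stopping the pipeline")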

Picking the Right Tools

With a general idea of the DevOps environment defined, you need to find the right tools for your developers and operations people. DevOps tools generally fall into one of four categories:

Version control:

Every deployment that updates the currently running application should increment the registered version number. By numbering versions, developers know what changed in case they need to roll back or fix bugs. Version control can be handled by change management software.

Build and deploy:

Developers who check in buggy code introduce issues into the codebase. With build and deploy tools, the code is first built to surface any syntax errors and then, provided the build succeeds, deployed to a testing environment automatically.

Functional and non-functional testing:

Software can be vulnerable to a variety of issues, including logic and syntax errors, cybersecurity exploits, and performance problems. Testing tools identify these issues and report them to developers and team leads.

Provisioning and change management:

Occasionally, software deployments require infrastructure changes such as permissions, platform configurations, monitoring, and data. These tools provision the changes during deployment to ensure a smooth transition.

Conclusion

If your organization needs faster, less buggy code promotions that involve operations, the point of DevOps is to remediate exactly these issues. The change in culture and procedures takes upfront commitment: time to set up the new team, infrastructure changes, documentation, and Agile adoption. All of these steps lead to a much more streamlined code-to-production process and a far more stable IT environment.

Best Practices for Migrating Data to the Cloud https://www.kandasoft.com/best-practices-for-migrating-data-to-the-cloud/ Tue, 08 Sep 2020 15:35:45 +0000 https://www.kandasoft.com/?p=34463 The benefits of moving organizational data to the cloud are well known. By moving to the cloud, companies can improve performance, create a flexible environment for storage, scale easily, and realize cost savings.

However, you can’t just press a button and migrate your data to the cloud. Adopting the cloud means first having a business strategy that details how you will use it. Only then can you start planning how to get your data off on-premise storage and into the cloud.

That’s why it’s crucial to work with a cloud consulting service that has expertise in cloud data migrations. This article reviews the best practices for migrating data; proper strategy, the right tools, and experience with the migration process are among the critical components.

The cloud ecosystem can be a great thing for businesses. But it’s also tough to navigate, especially when you’re trying an end-to-end migration without the right experience to pull it off.

Here are some thoughts on how to make the entire transition much more manageable when migrating your data to the cloud.

Create a Cloud Data Migration Strategy

The first thing your organization must do is create a cloud data migration strategy. That process starts by creating a plan that will demonstrate how you’ll be using the cloud. You’ll want to identify all your stakeholders, develop a roadmap of processes to follow, and consider everything that needs to go into the migration.

If you fail to do this, you’re putting your entire project at risk before you start. You can run into problems before, during or after the migration, including having people on your team fail to adapt. Your plan will reinforce the importance of the migration and what the expectations are for your organization.

There are many strategies your organization can use. At Kanda Software, we prefer our custom “Move and Improve” strategy. We can accelerate time to value and minimize your risk because the strategy is tailored to your unique business issues. We also factor in your long- and short-term strategic goals and objectives.

A proper analysis includes reviewing crucial issues like:

  • Business objectives and constraints
  • Application cloud fit
  • Migration readiness
  • Resource constraints
  • Application constraints
  • Cost-benefit analysis by application
  • Vendor and ecosystem selection

In our analysis phase, we review the application architecture, dependencies, and the impact of the constraints. That allows us to create a detailed plan recommending which workloads and applications you can migrate as-is and which need to be re-engineered.

Beware of Data Migration Challenges

The data migration process builds on everything done in the planning and analysis phase. You’ll know exactly which data is a cloud migration candidate as-is and which needs to be re-engineered first.

Further, you’ll have your roadmap and migration plan, and the migration schedule will be ready. Your cloud design will be completed, whether you are using a private cloud, shared cloud, public cloud, or another cloud service.

All of this planning matters because data migration carries several challenges and risks. Those risks include data loss: anything from a small loss to a sizeable catastrophe is not unheard of when working with inexperienced vendors. Sometimes data simply goes missing, and no one notices until a user or application needs it.

Compatibility issues are another risk. Your organization can experience data transfer problems with operating systems, file formats, user access rights between the source and target systems, and more.

Other risks include downtime, cost overruns, missed deadlines, and poor performance. These issues tend to occur when organizations use inexperienced vendors or attempt the migration in-house.

The Data Migration Process is Critical to Success

There is a way to avoid these challenges: use a data migration process that has worked repeatedly.

For example, when you work with us, we work with your end-users. They help us understand data rules, definitions, compliance issues, and what your data priorities are.

You’ll need to audit source databases before any migration. While storage migration is less difficult, you should still review your data to ensure a seamless move. There are also obsolete files, old email accounts no one uses anymore, and outdated user accounts that should be deleted.

Backing up data is critical so you don’t lose it. Finally, after the migration, run some final tests to make sure everything is correct. Then you can shut down your legacy systems and start using your new cloud storage system.
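
One simple form such a final test can take is comparing row counts and checksums between source and target for each migrated table. The sketch below uses sqlite3 purely as a stand-in; a real migration would use the drivers for the actual source and target engines, and the table names are assumptions:

import sqlite3

def fingerprint(conn, table):
    # Row count plus an order-insensitive checksum of all rows.
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    checksum = sum(hash(row) for row in rows) & 0xFFFFFFFF
    return len(rows), checksum

source = sqlite3.connect("source.db")  # stand-in for the legacy database
target = sqlite3.connect("target.db")  # stand-in for the cloud database

for table in ("customers", "orders"):  # assumed table names
    if fingerprint(source, table) == fingerprint(target, table):
        print(f"{table}: source and target agree")
    else:
        print(f"{table}: MISMATCH - investigate before decommissioning")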

Data Migration Tools are Important

During the deployment phase, you’ll need applications and software to ensure a successful migration.

The apps and software help guarantee security as you move precious data to the cloud and make the process less costly.

Our clients use customized services tailored to their migration, including SQL or NoSQL databases, application monitoring, transactional storage, archival storage, and analytical apps and software. These tools surface dark data, protect your databases, and keep the source and target databases in sync so they operate in real time.

Data Warehouse Migrations can be Tricky

One type of migration tends to be more complicated than the rest: migrating your data warehouse to the cloud may require more time and resources than other migrations.

Strategy and planning are critical when moving that much data. You’ll need to work with an experienced vendor who can:

  • Minimize your migration risks
  • Handle security issues
  • Help you understand the costs involved
  • Answer questions on performance

Data warehouse migration can take significantly longer, depending on how much data you need to move. That’s why you must work with an experienced team. We can give our clients accurate estimates, which helps them avoid unnecessary downtime.

Final Thoughts

Migrations can be stressful, but once you finish, you can move into the phase where you worry about optimization, automation and management.

It’s at this point where you can continue to reduce your costs. We can make it cost-efficient to move to private, hybrid or public environments like AWS, GCP and Azure.

So let us assess your organization’s readiness and perform the complex migration for you. That’s how you ultimately maximize the ROI on your cloud migration investment.

Cloud Storage Best Practices for Enterprise Development https://www.kandasoft.com/cloud-storage-best-practices-for-enterprise-development/ Wed, 26 Aug 2020 02:35:05 +0000 https://www.kandasoft.com/?p=34159 Whether you’re looking for cost savings or higher drive capacity, cloud storage has solutions for any enterprise. Even with its advantages, cloud storage requires a different approach than configuring internal network storage. Misconfigurations could leave your organization’s data open to attackers, and disorganized management could lead to unnecessary costs. Following these best practices eliminates most of the pitfalls an organization can encounter when integrating cloud storage into applications, infrastructure, and failover strategies.

Why Use Cloud Storage?

Before you create strategies around storage, the first step is to determine the use case specific to your organization. Every organization has their own goals for storage, but some common reasons for cloud storage implementation include:

  • Software as a Service (SaaS): Software that runs in the cloud can more efficiently and conveniently store and retrieve data from cloud storage to support global customers.
  • Large data collection for analysis: Big data analytics relies on large amounts of collected unstructured data that could reach petabytes of storage capacity. Cloud storage saves on costs and scales automatically as needed.
  • SD-WAN infrastructure: Companies with several geolocations can speed up performance on cloud applications using a software-defined wide-area network (SD-WAN) that implements cloud storage integrated into the infrastructure.
  • Scaled local storage: Adding network drives to internal storage is expensive. Cloud storage can be added within minutes to internal infrastructure and only costs a fraction of the price of local drives.
  • New development: Developers can take advantage of cloud storage without provisioning expensive infrastructure locally. When a project is ready to be deployed, it’s easy to deploy on existing cloud infrastructure.
  • Virtual desktops (VDI): VDI environments must scale as users are added to the network, and cloud storage lets businesses scale those user environments on demand.
  • Email storage and archiving: Email communications must be stored for compliance and auditing. Accumulated messages and attachments require extensive storage space that cloud storage can manage.
  • Disaster recovery: Cloud storage can be used for backups or failover should local storage fail. Using cloud storage for disaster recovery can significantly reduce downtime.
  • Backups: Cloud storage is probably best used for backups. Combined with disaster recovery, cloud backups offer businesses a way to have complete backups stored off-site in case the office suffers from a natural disaster where internal hardware is destroyed.

Strategies and Best Practices

After your use case is determined, strategies for your cloud storage configurations can be analyzed. Many of the strategies and best practices revolve around cybersecurity and configurations, but others determine the way you should manage your cloud storage and organize files. Not every strategy is necessary, but the following best practices will help administrators get started provisioning, configuring, and managing cloud storage.

Consider at least two providers

If your goal is to store application data, it might be worth investing in at least two providers. Most cloud providers offer extensive uptime guarantees, but relying on a single provider leaves the corporation open to a single point of failure. In 2017, human error caused an outage in Amazon Web Services (AWS) storage in the US-EAST-1 region. It’s rare for AWS to fail, but it is a possibility, and if your application relies on only one provider, that could mean downtime until the cloud provider recovers.

A second provider can also be configured as a failover resource: should the primary cloud provider fail, the secondary takes over. For instance, Microsoft Azure or Google Cloud Platform (GCP) could be used as failover for AWS. Note that this adds considerable cost to the enterprise, but it can also save thousands of dollars in losses from cloud provider downtime.
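
A minimal sketch of the read path for such a failover, assuming the data is already replicated to both providers; the bucket names are illustrative, and in practice you would catch specific exception types rather than a bare Exception:

import boto3
from google.cloud import storage as gcs

def fetch(key: str) -> bytes:
    # Try the primary provider first; any failure falls through to
    # the secondary, which holds a replicated copy of the data.
    try:
        s3 = boto3.client("s3")
        return s3.get_object(Bucket="myapp-primary", Key=key)["Body"].read()
    except Exception:
        bucket = gcs.Client().bucket("myapp-secondary")
        return bucket.blob(key).download_as_bytes()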

Review Compliance Regulations

Several compliance regulations mandate cybersecurity standards for the way organizations manage customer data. Any personally identifiable information (PII) must be stored in encrypted form and monitored and audited for unauthorized access. The European Union’s General Data Protection Regulation (GDPR) requires that businesses let customers request deletion of their data, and PCI-DSS oversees merchant accounts and financial transactions. Review any regulatory standards under which the business could be fined for poor cloud storage management.

When choosing a cloud provider, check that they are Service Organization Controls (SOC) 3 compliant. SOC 3 cloud providers must offer transparency reports to the public on the way security and infrastructure are managed. The provider’s data centers should also be Tier 3. Tier 3 data centers provide a 99.982% uptime guarantee, which is only 1.6 hours of downtime per year.

Keep Strict Access Control Policies

Even large, well-known organizations make the mistake of leaving cloud storage publicly accessible, which leads to large data breaches. You don’t need to be a hacker to find open-access AWS buckets; online scanning tools let anyone find openly available data. Ensure that your cloud storage isn’t open to the public, but remember that this configuration isn’t the only access control policy needed.

Folders and files stored in the cloud should have the same strict access controls as your internal data. Cloud providers offer account access and management tools, and many of them integrate with internal services such as Active Directory. Use permissions based on the least privilege standard, which says that users should only have access to files necessary to perform their job functions. This standard helps reduce the chance for privilege escalation and stops attackers from traversing the network freely on a high-privilege account.
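
As one concrete example of enforcing that baseline on AWS, the boto3 call below blocks every form of public access at the bucket level, so access must then be granted through explicit, least-privilege policies. The bucket name is an assumption for the sketch:

import boto3

s3 = boto3.client("s3")

# Block all public ACLs and policies on the bucket; access must come
# from explicit, least-privilege IAM grants instead.
s3.put_public_access_block(
    Bucket="myapp-data",  # assumed bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)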

Use Cryptographically Strong Algorithms for Encryption

Whether driven by your own standards or by compliance, always encrypt stored sensitive information and PII. Weak cryptographic algorithms leave the data open to dictionary and brute-force attacks, so choosing the right algorithm matters just as much as encrypting at all.

Encryption adds some performance overhead, so take performance into consideration. The Advanced Encryption Standard (AES) with 128-bit keys is a cryptographically secure symmetric algorithm often used for data storage. AES with 256-bit keys offers a higher protection margin at some cost in performance. For password storage and other one-way hashing, the Secure Hash Algorithm 3 (SHA-3) standard is available.
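
A short Python sketch of both operations using the widely used cryptography package and the standard library; the sample plaintext and password are placeholders:

import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Symmetric encryption with AES-128 in GCM mode (authenticated).
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)  # GCM uses a unique 96-bit nonce per message
aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"sensitive record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive record"

# One-way hashing with SHA-3, salted so identical inputs differ.
salt = os.urandom(16)
digest = hashlib.sha3_256(salt + b"user-password").hexdigest()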

Organize Data and Archive Unused Files

Organizing folders and files helps administrators determine whether they should be backed up, whether they contain sensitive information, and whether they can be archived. Archived data is moved out of its original storage location (and deleted there) but remains available should the organization need to review or audit it in the future. Archives can be compressed when stored, so archiving unused data yields cost savings.

It’s also beneficial in determining access controls across large folder trees. Organized folders make every aspect of storage management easier for administrators, so a policy on the way folders should be set up will improve cost savings, backup strategies, and archive management.

Set Up a Retention Policy

Retention policies are common for backups, but cloud providers also offer retention policies that protect against accidental deletion. Instead of permanently deleting data, a retention policy on cloud storage holds it for retrieval and recovery for a set amount of time before permanently removing it. This saves administrators the time of recovering data from backup files.
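
On AWS, for example, a similar effect can be sketched with object versioning plus a lifecycle rule: deletes become recoverable “noncurrent” versions, which are purged after a holding period. The bucket name and the 30-day window are assumptions:

import boto3

s3 = boto3.client("s3")

# Versioning turns deletes and overwrites into recoverable
# "noncurrent" object versions instead of destroying data.
s3.put_bucket_versioning(
    Bucket="myapp-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# A lifecycle rule then permanently removes noncurrent versions
# after the retention window expires.
s3.put_bucket_lifecycle_configuration(
    Bucket="myapp-data",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retain-deleted-30-days",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }]
    },
)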

Use the Access-Control-Allow-Origin Header for Strict Access Controls from Web Requests

Cross-Origin Resource Sharing (CORS) is a browser security standard that controls which origins may read a resource. If your application reads data from cloud storage, you must grant it access using the Access-Control-Allow-Origin header. Some developers set the header value to an asterisk (‘*’), which tells the cloud storage bucket to allow any application to read from it. This permissive misconfiguration leaves bucket data open to any attacker-controlled site.

For example, it’s not uncommon for developers to use the XMLHttpRequest object to retrieve external data in JavaScript. When such a cross-origin request is made, the browser checks the server’s CORS headers (sending a preflight request first for non-simple requests) to determine whether the application has permission. If the requesting origin is included in the Access-Control-Allow-Origin response header, the request continues; otherwise, the browser’s CORS restrictions reject it.

To use a domain example, suppose your domain named yourdomain.com makes a request to an AWS bucket. Your AWS bucket should be configured to allow only yourdomain.com applications to retrieve data. AWS, GCP and Azure have these controls available to developers. The following Access-Control-Allow-Origin header would be the proper way to allow your application and disallow any others:

Access-Control-Allow-Origin: https://yourdomain.com

Should an attacker lure users to a malicious site via a phishing message, any attempt by that site to read data from the bucket would be blocked by the browser thanks to the above header configuration.
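
On AWS, the equivalent bucket-level configuration can be sketched with boto3; the bucket name is assumed, and GCP and Azure expose analogous settings:

import boto3

s3 = boto3.client("s3")

# Allow reads only from the application's own origin; requests from
# every other origin fail the browser's CORS check.
s3.put_bucket_cors(
    Bucket="myapp-data",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://yourdomain.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3600,
        }]
    },
)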

Configure Monitoring Across All Storage

Monitoring is not only part of many compliance requirements; it also keeps administrators informed about file access activity. Every major cloud provider offers monitoring controls, and they can be invaluable when attackers compromise infrastructure: monitoring can reduce the damage from an ongoing attack or stop an attacker’s vulnerability scans before they find an exploit opportunity.

Organizations can use monitoring tools for more than just cybersecurity. Monitoring can tell administrators whether data was accidentally deleted, help identify a failure, audit file access, and track current storage capacity and whether it needs to be increased.
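
As one starting point on AWS, server access logging records every request against a data bucket into a separate, locked-down log bucket for auditing; both bucket names here are assumptions:

import boto3

s3 = boto3.client("s3")

# Record an access-log entry for every request against the data
# bucket, delivered to a dedicated audit bucket.
s3.put_bucket_logging(
    Bucket="myapp-data",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "myapp-audit-logs",
            "TargetPrefix": "s3-access/",
        }
    },
)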

Conclusion

Cloud storage has several benefits for organizations, but the way it’s managed and configured plays a big role in its successful implementation. It saves on IT costs, but it also can cost organizations millions of dollars should the infrastructure be misconfigured. Before implementing cloud storage in your software deployment or backup strategy, take the time to prepare access policies, organization standards, and a monitoring setup.
