Why Technology Is Set to Play a Larger Role in Life Sciences

Historically, the life sciences industry has treated information technology as a way of completing basic functions rather than as an intrinsic, vital part of its work.

Over time, however, technology has reached deeper into the life sciences as patients become more engaged in their care and as data collection and analytics become ever more useful.

Change in the Healthcare Sector

The healthcare sector is being swiftly challenged and changed by technological advances. Providers are increasingly looking to offer new and different methods of delivering care, and healthcare users are expecting a more personalized service.

Today's healthcare users expect their care to be clear to them at every step. They expect transparency, a smooth service, convenience, and inclusion in decision-making at every stage of their healthcare journey.

Data analysis and service automation can increase a life sciences company's productivity while enabling a more personalized service. With the addition of digital technologies, outcomes can be improved and patients receive a more engaging experience. If a life sciences company wants to lead the healthcare industry, it must consider the service its users expect.

Digital Markets

Digital markets give access to vital data that can be collected, analysed, and used to create solutions for life sciences companies and their users alike. Technological solutions that link with the life sciences will allow the sector to develop clear strategies and services.

Any life science company that wishes to be a leader in its sector would do well to fully embrace digital markets and all the advantages they can bring.

New Platforms

Modern technologies, and new ways of looking at technology within the life sciences sector, allow a company to examine the basic technology it uses and decide what must change for it to fulfil its true potential.

To future-proof themselves, life sciences companies need to embrace the latest cloud-based technology and consider the newest platforms available. This will boost patient satisfaction while streamlining operations for staff.

Read more here

 

Opportunity Soars as Drone Delivery Services Take Flight

In the age of automation and same-day delivery, the race among companies to develop widespread, reliable package drop-off via drones has only just begun. After years of design and testing, drone services are finally being implemented and used by the public. These tryouts, however, have been met with varying degrees of success.

Reporter Timothy B. Lee in a piece for arstechnica.com states, “Earlier this month, Google’s sister company, Wing, began offering a drone delivery service in the Dallas suburbs of Frisco and Little Elm. Wing drones take off from ‘nests’ in two Walgreens parking lots to deliver things like health products or ice cream to nearby customers. Wing describes it as ‘the first ever commercial drone delivery service in a major US metropolitan area.’”

So far, Wing’s drone delivery service has been an all-around victory. With successful safety measures, impressive speed both in flight and at hand-off, and high demand for the service, it’s no wonder the company recently crossed its 200,000-delivery mark.

“Wing has obtained [a Federal Aviation Administration (FAA)] waiver allowing it to fly beyond the visual line of sight. That allows Wing to offer deliveries as far as four miles away from a drone’s home base. Wing’s waiver also allows flights over people. This allows drones to pick up a new package by hovering about 23 feet (seven meters) in the air and extending a tether down to the ground.”

From order to drop off, interaction with delivery workers or the drone itself is never necessary on the patron’s part, meaning service is quick, easy, and pandemic-approved.

Walmart’s “DroneUp” has seen similar success in its startup. Though its drones are slower and heavier than Wing’s, and the company has a harder time obtaining waivers from the FAA, its safety and satisfaction record has been stellar.

“FAA regulations prohibit a drone from flying over people or moving vehicles. [Tom Walker, founder of DroneUp] says that DroneUp’s aircraft have ‘the ability to dynamically route around areas where [people and moving vehicles] might be, and also have sensors that let us know where people are on the ground.’ The drones also have multiple redundancies to help ensure that the failure of any one component won’t cause it to crash.”

Fast, smart, and safe, DroneUp is customer-approved and ready to grow its services. Walmart plans to expand the operation from 200 to 600 people this year, with a significant percentage of that new workforce composed of drone operators.

Despite the success of Walmart’s DroneUp and Google’s Wing, not all big-brand drone delivery services have had the same smooth sailing with their debuts. Amazon has yet to create an approved drone, much less implement a widespread system. Heavy and unsafe, the online retail giant’s drones have not been able to meet FAA standards.

“An article in Wired last year described the turmoil in the UK branch of Amazon’s drone project. Amazon started testing delivery drones in the UK in 2016, but by 2021 people were telling Wired that the program was ‘collapsing inwards,’ ‘dysfunctional,’ and resembled ‘organised chaos’ run by managers that were ‘detached from reality,’” recounts Lee.

Even if a widespread rollout of delivery drones hasn’t become reality quite yet, there is little doubt it soon will, given how rapidly the technology is evolving. And when those billion-dollar retail giants call for skilled experts to design, build, test, and operate their drones, will you be ready?

Capitol Technology University offers many opportunities in unmanned systems, where you can learn indispensable, industry-ready knowledge and prepare to answer the call for skilled aviation experts.

To learn more about these programs as well as our wide breadth of other STEM fields of study, visit captechu.edu and peruse the various courses and degrees offered. Many courses are available both on campus and online. For more information, contact admissions@captechu.edu.

Quantum-Charged Electric Cars

Amidst the race for clean energy, electric vehicles (EVs) are taking a favorable spin around the globe. Seeing them as a greener option than diesel- and gas-powered engines, a new generation of drivers has emerged, preferring to run solely on electric power.

However, the day-to-day upkeep of such an engine can be cumbersome. According to reporter Martin M. Barillas in a newsweek.com article, cars “may take as long as 10 hours to fully charge at home, while even superchargers at charging stations take 30–40 minutes to provide a full charge.”

With such long waits set against limited driving range, the shift toward electric vehicles remains stunted. However, brand-new battery technology may soon put EVs back on the leaderboard as keys to a green future.

“Scientists at Korea’s Institute for Basic Science (IBS)… [have discovered] new quantum technologies that can quickly charge batteries. They drew inspiration from a 2012 study, which proposed the quantum battery concept and theorized that quantum resources such as entanglement may charge batteries at a vastly faster rate by charging all cells in a battery simultaneously,” the Newsweek article explains.

Currently, electric car batteries cannot be charged all at once, and the need to recharge cells one at a time is the main source of pacing issues. With quantum batteries, not only would this issue be erased, but charging would be exponentially faster.

“A team of scientists from the Center for Theoretical Physics of Complex Systems at IBS… found that unlike classical batteries (e.g., lithium-ion batteries), where maximum charging speeds increase according to the number of cells, quantum batteries with global operation may achieve quadratic scaling in charging speed.

“[A]s quantum batteries increase in size, charging times become faster. For example, when going from 1 to 2, instead of increasing by a factor of 2, it increases by a factor of 4, and when going from 1 to 10, it increases by a factor of 100.”
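The scaling in that example is easy to check numerically. Below is a minimal sketch of the claim as quoted, assuming classical charging speed grows linearly with cell count while quantum charging grows quadratically; the pack sizes and the 10-hour home-charging baseline are illustrative, not figures from the study.

```python
# Charging-speed scaling per the quote above: linear in cell count N for a
# classical battery, quadratic (N**2) for a quantum battery with global
# operation -- so the quantum advantage over classical is a factor of N.

def charging_speed(n_cells: int, quantum: bool) -> int:
    return n_cells ** 2 if quantum else n_cells

CLASSICAL_FULL_CHARGE_HOURS = 10  # the at-home figure quoted earlier

# Hypothetical pack sizes; real EV packs vary by manufacturer.
for n in (1, 2, 10, 200):
    advantage = charging_speed(n, True) / charging_speed(n, False)
    minutes = CLASSICAL_FULL_CHARGE_HOURS * 60 / advantage
    print(f"{n:>3} cells: {advantage:>5.0f}x quantum advantage -> "
          f"full charge in {minutes:6.1f} min")
```

At 200 hypothetical cells, the quadratic advantage turns a 10-hour charge into roughly 3 minutes, consistent with the "few minutes" estimate below.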

With this supercharged scaling, a normal at-home charge would be reduced from double-digit hours to just a few minutes, while wait times at public charging stations would drop to mere seconds. And if such technology succeeds, it could be utilized for purposes outside travel.

“Quantum charging may be used someday in consumer electronics as well as in fusion power plants, which need large bursts of energy for instant charging and discharging. However, the researchers caution that quantum technologies still need years of research before they can be introduced to revolutionize energy use and green technologies.”

It may take some time yet for quantum technology to be thoroughly researched and approved, but if enough dedicated people join the team to make it a reality, these batteries could come sooner than we hope.

Capitol Tech offers many opportunities in engineering, where you can join the pursuit to make electric car batteries sustainable and quick to power.

To learn more about these programs, visit captechu.edu and explore the various fields of study offered. Many courses are available both on campus and online. For more information, contact admissions@captechu.edu.

 

5 Common Cloud Computing and Connectivity Challenges

According to Gartner, spending on enterprise cloud platforms will increase 14% by 2024. One reason is that, especially for small to medium-sized businesses, cloud computing is proving to be 40x more cost-effective than maintaining in-house IT systems and data centers. The public cloud (as opposed to a private cloud) is easy to use, cost-efficient, and scalable. Yet migration is not a simple task, and success is not a given, especially when organizations encounter unanticipated obstacles.

In a recent discussion, Pranav Kondala, solutions architect for Hughes, identified some of the most common challenges enterprises experience with cloud computing and connectivity. These include:

  1. Lack of knowledge and expertise. Many businesses launch cloud computing initiatives without realizing they do not have the right talent available in-house to support their digital transformation. Additionally, Mr. Kondala notes, “with cloud architecture expertise in high demand, finding talent in this market has challenges of its own.”
  2. Visibility. Moving workloads to the public cloud means losing many of the controls an enterprise once maintained with on-premises solutions. Cloud providers do not grant their customers direct access to shared infrastructure, and traditional monitoring tools will not, in many cases, work in the cloud. While cloud providers may supply log files detailing workload activity, without access to the underlying data packets, analysts and operations teams can’t use those log files to investigate alerts, identify root causes, and remediate threats. Lack of packet data also limits their ability to investigate performance issues in complex cloud environments.
  3. Data protection and privacy. When it comes to securing data in the cloud, it is important to understand the subtle differences between on-premises and cloud security approaches. It is a myth that the cloud is always more secure than on-prem capabilities. Even though the cloud service provider may assure data integrity, it is always the enterprise’s responsibility to understand what the provider is doing to protect against intruders and keep up to date on the latest security fixes, as well as the steps the provider will take in the event of a breach. “Having the right security posture from Day 0 is critical for any cloud migration effort,” stressed Mr. Kondala.
  4. Secured connectivity into the public cloud. Especially for a distributed enterprise with hundreds of locations, secure connectivity to the public cloud is complex and relies on many different providers, all of which have unique infrastructures, capabilities, and costs. It is also time- and resource-intensive for IT teams to manage multiple providers.
  5. Performance may vary. As with any scenario involving service providers, an enterprise may experience performance variances between or within vendors. Yet when network performance suffers due to cloud connectivity, so does the user experience. Mr. Kondala advises paying careful attention to network issues, such as latency, packet loss, and congestion, which can undermine cloud application performance.
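On that last point, even a simple measurement loop provides useful visibility. Here is a minimal sketch that times TCP connection setup to an endpoint and reports median latency and jitter; the hostname is a placeholder for your own cloud endpoint, not a Hughes-specific tool.

```python
# Basic cloud-connectivity latency check: time TCP handshakes to an endpoint
# and summarize the results. Requires only the Python standard library.
import socket
import statistics
import time

def tcp_latency_ms(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Return TCP connection-setup times to host:port, in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # connection established; close immediately
        results.append((time.perf_counter() - start) * 1000)
    return results

if __name__ == "__main__":
    times = tcp_latency_ms("example.com")  # substitute your cloud endpoint
    print(f"median {statistics.median(times):.1f} ms, "
          f"jitter {statistics.pstdev(times):.1f} ms")
```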

One of the most effective ways to mitigate these and other challenges is to invest in training and certification for the IT team members responsible for managing the cloud strategy and implementation. Another option is to turn to a Managed Network Services Provider (MNSP), like Hughes, that offers deep bench expertise to address these and other challenges and to optimize cloud performance once the shift is complete. An MNSP can lead transition efforts or supplement an internal IT team’s activity, providing the resources needed to efficiently grow a business’s capabilities.

“The cloud environment and data center are incredibly different footprints. It is important that your teams are trained to embrace and understand what is required of a realistic cloud strategy. Whether you are just starting off with cloud computing or are in mid-migration, finding the right talent is critical and can save a lot of money and trouble in the long run,” Mr. Kondala said.

Watch the full TechTalk interview between Mr. Kondala and Tim Tang here.

Up to Speed: Issue 1, Hypersonics Primer

This is the first installment in a new research series from FON Investment Banking that will provide commentary and analysis on developments in the hypersonic weapons industry. This issue is a primer that sets a baseline for the current state of hypersonic weapon development, with more focused reports to follow on hypersonic topics including missile defense, space-based systems, infrastructure, and adversary programs.

Background
Hypersonic weapons – which travel at speeds in excess of Mach 5 (approximately 1 mile per second, or 6,000 km/h)1 – have been a part of defense research and development in the United States since the 1960s.2 It was during the George W. Bush administration in the early 2000s that the US began to make more targeted investments to develop hypersonic weapons as part of the Conventional Prompt Global Strike (CPGS) program.3

Hypersonic weapons are typically classified into two categories – hypersonic glide vehicles (“HGV”) and hypersonic cruise missiles (“HCM”):

  • HGVs are unpowered vehicles that are launched from a rocket and then released to glide to their targets. These vehicles are maneuverable once they reach the glide phase, and therefore hold large areas at risk during flight. Unlike ballistic missiles, which can reach altitudes of 1,200+ km, HGVs fly between 40 km and 100 km – trajectories that create significant challenges for existing land- and space-based detection systems and sensor architecture.
  • HCMs can be launched from the ground, from aircraft, or from ships. These missiles have airbreathing engines, known as supersonic combustion ramjet (scramjet) engines, that produce thrust at hypersonic speeds.5 Like HGVs, HCMs combine speed and maneuverability, making them highly effective weapons compared to conventional cruise missiles.
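To put those speed classes in perspective, the back-of-the-envelope arithmetic below converts Mach numbers into flight times over a hypothetical 1,000 km range, taking Mach 1 at sea level as roughly 1,225 km/h; both the range and the sea-level approximation are illustrative, not figures from this report.

```python
# Flight time over a fixed distance at different Mach numbers, using the
# approximation Mach 1 ~ 1,225 km/h at sea level.

MACH_1_KMH = 1225

def flight_time_minutes(distance_km: float, mach: float) -> float:
    return distance_km / (mach * MACH_1_KMH) * 60

# Subsonic cruise missile vs. hypersonic weapons over a hypothetical range.
for mach in (0.8, 5, 10):
    t = flight_time_minutes(1000, mach)
    print(f"Mach {mach:>4}: {t:5.1f} minutes to cover 1,000 km")
```

A Mach 5 weapon covers 1,000 km in under 10 minutes versus roughly an hour for a subsonic cruise missile, which is the timeline compression discussed below.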

Hypersonic weapons can be paired with nuclear or conventional warheads; however, given their high rate of speed, conventional hypersonic weapons are expected to use only kinetic energy to destroy targets.6 The combination of maneuverability, speed, and flight trajectory not only makes this first generation of hypersonic missiles very difficult to detect and defend against, it also materially compresses the timeline for decisionmakers to assess and respond to the threat once the weapon is in flight, creating urgency among major governments to field both offensive and defensive capabilities. Read More

How do Carbon Offsets Work?

We all have a carbon footprint. Everyday activities like driving your car, using your computer, operating a business, or heating your home generate greenhouse gas emissions that contribute to your carbon footprint, which impacts the environment. The best thing we can do to preserve the environment is to take actions that reduce our carbon footprint, such as using mass transit and choosing more efficient lighting and appliances.

However, no matter how much we reduce, it’s virtually impossible to avoid all of the emissions that contribute to our carbon footprint. This is where carbon offsets come in: you can counterbalance, or green, your unavoidable footprint with CleanSteps®.

Here’s how it works: a business like a landfill or transportation company develops a project that reduces greenhouse gas emissions above and beyond what is already required of it by law. These reductions are measured and verified by independent third parties such as Green-e®.

A carbon offset is then created for every metric ton of carbon dioxide emissions the project reduces. These carbon offsets can be purchased by other individuals and businesses to counterbalance, or green, their carbon footprint, since the projects would not otherwise have been developed. This way, everybody wins.
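The arithmetic behind an offset purchase is straightforward. The sketch below uses the figures from this article (one offset per metric ton of CO2, plus the car and sapling equivalences given below); the example footprint is hypothetical.

```python
import math

# One carbon offset covers one metric ton of CO2. Per this article, a single
# offset is roughly equivalent to two months of average-car driving or to
# 16.5 tree saplings grown for 10 years.
TONS_PER_OFFSET = 1.0
CAR_MONTHS_PER_OFFSET = 2
SAPLINGS_PER_OFFSET = 16.5

def offsets_needed(annual_emissions_tons: float) -> int:
    """Whole offsets required to cover a yearly footprint."""
    return math.ceil(annual_emissions_tons / TONS_PER_OFFSET)

footprint = 8.2  # hypothetical household footprint, metric tons per year
n = offsets_needed(footprint)
print(f"{n} offsets cover {footprint} t of CO2 -- like taking a car off the")
print(f"road for {n * CAR_MONTHS_PER_OFFSET} months, or planting "
      f"{n * SAPLINGS_PER_OFFSET:.1f} saplings.")
```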
CleanSteps® is an easy and simple way to reduce your carbon footprint and promote cleaner air and water. The environmental benefit of just one carbon offset compares to taking the average car off the road for about two months, or to planting 16.5 tree saplings that will grow for 10 years.

Market Confusion: Understanding the Options for Telco Services

The telecom industry is a complex matrix of companies and partners providing the hardware, connectivity, services and software required for Unified Communications, including phone, fiber, cable, advanced telephony functionality, and more.

Feeling a bit overwhelmed?

For corporate decision-makers, a firm grasp of the industry will provide the insight and clarity required to make informed decisions. Knowing who the players are, their roles, and what they actually provide compared to other options goes a long way to knowing you have the best fit for your business.

The competitive layout of telecom is fluid and constantly affected by ongoing changes, acquisitions, and government regulation. Consider this article a snapshot of the current telecommunications landscape.

The ABCs of the Telecom Industry

While understanding who the providers are for every area of telecom is valuable, we know it can feel like a lot to stay on top of. This is one of the reasons why we work with small and medium-sized businesses, nationwide, to help them determine the best services for their needs.

ILEC PROVIDERS

ILEC – (Incumbent Local Exchange Carrier) a telephone company that held a regional monopoly on providing local service when the Telecommunications Act of 1996 was enacted.

ILECs are the legacy phone service providers that are mandated to provide and maintain copper services across the nation. While densely populated areas are moving to fiber optics and wireless services, rural areas still rely on ILECs to provide services over copper wires and T1 lines.

For the most part, any service delivered over copper will use the ILECs’ maintained network of landlines. ILECs are large regional providers (though the wireless side of their business is nationwide in scope), and their infrastructure and size can become a weakness. It’s really no surprise that customer service is one of the most common complaints about these behemoth carriers.

However, for companies in rural areas, or with offices spread across the nation, no other type of carrier can provide the reach and services they may require. Rather than having a different provider at each location, many enterprises choose an ILEC to provide service at all locations. This centralizes their telecom services and makes them easier to manage as a whole.

CLEC PROVIDERS

CLEC – (Competitive Local Exchange Carrier) is a telecommunications provider company (sometimes called a “carrier”) competing with other, already established carriers (usually the ILEC).

After the Telecommunications Act of 1996, many small local carriers burst onto the scene. Originally, they provided services, such as Internet or phone service, over copper lines leased from the ILECs. They could generally offer better pricing to the end customer for a variety of reasons, such as lower infrastructure costs and operational overhead.

Since the rush of the dot-com era in the late ’90s, many CLECs have been consolidated into larger companies. These companies are now starting to build their own fiber optic networks (or take over fiber already in the ground), providing services such as cloud-based phone service, fiber Internet connectivity, and data center services outside the ILECs’ copper landline network.

With the advancement of fiber optics, wireless, and voice-over-IP technology, CLECs are uniquely set up to provide the entire spectrum of telecommunications services to their regions. These companies are much smaller than the ILECs, which usually translates to newer, better-maintained technology and more personalized customer service and support.

Atlantech Online originally started out as an ISP but has grown into a registered CLEC. We are building our own fiber optic network in the Washington DC area, which allows us to provide direct-connect telecom services to our clients in the region, and we have options for businesses nationwide. Direct-connect over fiber means fast Internet, crystal-clear cloud-based phone service, and data center services with point-to-point fiber connectivity. On top of all that, we are 100% dedicated to providing the best possible customer service and support.

 

INTERNET SERVICE PROVIDERS

ISP – (Internet Service Provider) provides access to the Internet through cable, fiber, wireless or other technology. In addition, some provide other services such as colocation and web hosting.

ISPs rely on connectivity from ILECs and CLECs to provide a public connection to the Internet. ISPs are a good fit for smaller businesses and residential Internet service.

MULTIPLE SYSTEM OPERATORS

MSOs (Multiple System Operators) primarily provide TV service and sell advertising. They often bundle phone and Internet connectivity together with television service for residential users. Some small businesses also use them.

They operate via a franchise agreement with local jurisdictions to be the registered cable operator in a given geographic area and do not have nationwide footprints. Depending on the region, they can provide coverage to a patchwork of localities.

 

DATA CENTER OPERATORS

Data Center Operator – Privately owned and operated facilities that house floor space for servers and equipment.

Data Center Operators usually don’t have their own networks but offer ILEC, CLEC and ISP services in their data centers. The primary business operation is to provide floor space for servers and switching equipment to be deployed with robust physical security, redundant power systems, complex HVAC systems and “meet me” facilities for telecom carriers to terminate services.

INDEPENDENT AGENTS

Agents are a major player in the telecom industry, although many businesses don’t even know they exist. The agent provides customer referrals to carriers and service providers in exchange for a set percentage of the customer’s contract, perpetually.

The relationship can be very lucrative for the agent, who can earn as much as 25% per referral for the lifetime of the customer’s service. It makes sense from a carrier’s position as well: paying independent referral agents a perpetual commission is expensive, but it’s an alternative to paying a sales team to accomplish the same results.

Referral agents can be anyone, from tech support contractors to telecommunications consultants. Referrals are generally made with an “I know a person at X company that can get you better Internet.” The referral is made to a salesperson at the Internet provider, and the silent agent receives a commission.

The difference between an agent and a salesperson, of course, is that a sales team is paid an ongoing salary, and any commissions for sales are usually one-time sums. An agent, on the other hand, receives a set percentage for as long as you have service. Until that customer changes carriers, the referral agent gets a percentage of the monthly bill.
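A quick calculation shows why the model is so attractive to agents. The 25% rate is the high end quoted earlier; the bill size and retention period below are hypothetical.

```python
# Lifetime value of a single referral under a perpetual commission model.
monthly_bill = 1_000     # dollars -- hypothetical business customer
commission_rate = 0.25   # the high end cited above
years_retained = 5       # hypothetical customer retention

lifetime_commission = monthly_bill * commission_rate * 12 * years_retained
print(f"One referral pays the agent ${lifetime_commission:,.0f} "
      f"over {years_retained} years of service.")
```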

The biggest difference, however, is for the customers that pay for the service. They can easily recognize a salesperson from “Acme Internet Inc.” But they are usually completely unaware when an “agent” is selling to them.

For this reason, dealing with independent referral agents doesn’t always benefit the customer. In some cases, agents are incentivized to sell for the highest commission, not necessarily the best match for the customer’s needs.

It’s important to realize that referrals are commission-based sales in most cases. Anytime you are considering going with a new carrier, it’s important to shop around – regardless of which telecom provider you are being referred to. That way you can be sure you’re getting the best possible service, rather than being an “easy sale.”

What’s Right For Your Business?

Choose a provider that offers the following:

  1. A scalable, flexible solution with easy growth opportunity
  2. Streamlined billing for direct services provided
  3. A trustworthy, established network with a track record of success
  4. An independent, top level data center
  5. Access to cutting edge telephony and connectivity services
  6. Customer service that is proactive, attentive and responsive

The overwhelming number of choices means there is an ideal solution for every customer. Look for the one that is just right.

If you choose a small-scale provider, chances are you’ll be a big fish in a small pond. But there’s also a chance you’ll just be sold a commodity, or resold white-label services from a third-party carrier.

Choose too big, and you’re just an account number in a maze of customer service ambiguity. Not to mention the big guys have legitimate issues with disaster recovery due to massive infrastructure requirements.

Choose a more complicated, multi-vendor solution, and you’ve got a confusing bill, multiple providers, and money going out every month to a varied group of carriers and resellers that aren’t even directly providing you service or assistance.

 

Posted by
Tom Collins
Author Bio
Tom Collins is the Director of Enterprise Sales & Marketing for Atlantech Online. He has over 25 years of professional experience in the Internet Service Provider industry and is known for translating technology into positive results for business. A native of Washington, DC, and a graduate of the University of Maryland (degrees in Government & Politics and Secondary Education), Tom is also a five-time Ironman finisher.

A Fit Agile Framework

By Millie Paniccia

I’ve dedicated the last 18 years of my professional life to working in commercial software product development environments, from employee to executive. As the current Managing Partner of an advisory services firm, I have seen just about all there is to see in business, working with everything from startups with three employees to large organizations with tens of thousands. Nowadays, I spend most of my time helping organizations with product delivery issues improve, scale, or prepare for IPO.

I have learned that regardless of industry, organization size, or belief in an organization’s individuality, most of these environments actually have a lot in common, including teams made up of good people with good intentions. Unfortunately, most of these teams lack a common framework in which to operate.

I believe that an Agile framework is much better than how we used to get work done. I have found that the application of Lean Agile principles with consistency, just like a good exercise regime, can transform how product is delivered to market.

How to Get Your Agile Principles In Shape

Extensive writing already exists defining Lean Agile principles, and my intent is not to re-write those articles. The Scaled Agile Framework (SAFe©) website and texts are an excellent resource on this topic. While all of the Lean Agile principles have value and importance, I have found organizations are most likely to get in shape, and stay there, by staying mindful of the following:

  • Sometimes the only way to get into shape is to re-train teams, together
  • People really are doing their best 
  • Product and engineering need to be in the boat together

To read the full article, click here: https://www.tecveris.com/resources/getting-your-agile-framework-into-shape/

 

Posted by
Millie Paniccia
Author Bio
Millie has led PMO, Help Desk Operations, Software Development, QA, and Product Development teams. Millie is a certified Scaled Agile Framework (SAFe) Agilist and a strategic leader in Lean Agile adoption.

Does Regulatory Compliance Apply to My Business? Yes.

Today, almost all businesses are affected by compliance. Whether you’re in the healthcare industry and are bound by HIPAA regulations, or you’re a manufacturer attempting to meet NIST standards before you lose your government contract, your business cannot afford to be in the dark about compliance regulations.

What Technologies Should be in Place to Remain Compliant?

Data Encryption – All regulatory programs require organizations to encrypt and control their sensitive data. When data is encrypted and controlled with data loss prevention policies, the information is illegible: it cannot be read without a secret key and proper permissions.
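As a concrete illustration, the snippet below encrypts a record with a symmetric key using the widely used Python cryptography package. This is a minimal sketch of encryption at rest, not a full data loss prevention setup, and the sample record is invented.

```python
# Minimal encryption-at-rest sketch using the "cryptography" package
# (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the secret key; store it in a key vault,
fernet = Fernet(key)         # never alongside the data it protects

record = b"patient: Jane Doe, DOB: 1980-01-01"  # invented sample data
token = fernet.encrypt(record)                  # illegible without the key
print(token[:32], b"...")

assert fernet.decrypt(token) == record          # readable only with the key
```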

Data Life Cycle Management – It is easy to lose track of information after it leaves its original source. Do you know what happens to your data after you hit send on an email? Most regulatory standards require that you track exactly who sees that data and what they do with it. Data life cycle management software allows organizations to track the entire life cycle of their documents, and to revoke access to that sensitive information at any time.

Disaster Recovery – What is the first step your business would take in the event of a breach? How long would it take to get up and running if you suffered a natural disaster? Being compliant means having a disaster recovery plan in place, and testing that plan regularly to ensure its effectiveness.

Next Steps

Given the complexity of the requirements and what is at risk if you don’t comply, an IT resource that understands how to maintain compliance in your industry is essential. Consider a third-party resource, so you can focus on your business while they handle the rest.

Posted by
Advance Business Systems
Author Bio
Advance Business Systems helps organizations focus on their core mission by providing technology that can increase efficiency and effectiveness and services that eliminate the distractions that many organizations face. The right resources and a plan are critical to an organization achieving and exceeding its goals. Advance provides services such as IT planning and support that will take IT off your plate, keep you from worrying about data security, and position your business for the future. Having the right business technology solutions in place, such as multifunctional copiers, interactive whiteboards, and document management software, can greatly improve the flow of information through an organization.

The Spectre and Meltdown Vulnerabilities: a CPU/Architecture Perspective

Spectre and Meltdown are the names given to recently discovered vulnerabilities that affect almost every computer chip manufactured in the last 20 years. If exploited, they could give attackers access to data previously considered completely protected. The Spectre and Meltdown flaws work by exploiting two important techniques used to make CPU chips execute faster, called speculative execution and caching.

Speculative execution allows a CPU to attempt to predict the future in order to work faster. For example, if the chip determines that a program contains multiple logical branches, it will start calculating the values for all of the branches before the program decides which branch to take. When the correct branch is determined, the CPU has already produced the values for that branch. Similarly, if the CPU sees that the same function is used frequently, it might use idle time to compute that function so that the likely answer is ready if needed.
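The branch example can be mimicked in ordinary code. The toy sketch below is illustrative only; a real CPU does this in hardware and discards the mispredicted path automatically.

```python
# Toy model of speculative execution: compute both branch outcomes before the
# (slow) condition resolves, then keep the one the condition selects and
# discard the other -- as a CPU squashes a mispredicted path.

def slow_condition() -> bool:
    # Stands in for a comparison whose inputs are still in slow main memory.
    return True

def branch_taken() -> str:
    return "result of the taken branch"

def branch_not_taken() -> str:
    return "result of the not-taken branch"

# "Speculate": do the work for both branches up front.
speculative_results = {True: branch_taken(), False: branch_not_taken()}

# Once the condition is known, only the matching result is kept.
print(speculative_results[slow_condition()])
```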

Caching is used to speed up memory access. Random access memory (RAM) is located on separate chips and it takes a relatively long time for the CPU to access data in the RAM. There is a special small amount of memory storage called CPU cache that is built on the CPU chip itself that can be accessed very quickly. This cache memory gets filled with data that the CPU will need soon or often. Data that is produced by such speculative execution is often stored in the cache, which contributes to making it a speed booster. The problem arises when caching and speculative execution start circumventing protected memory.

Protected memory is a foundational concept underlying computer security. It allows a program to keep some of its data private from some of its users, and allows the operating system to prevent one program from seeing data belonging to another. In order to access data, a process needs to undergo a privilege check, which determines whether or not it’s allowed to see that data.

A privilege check can take a relatively long time. Due to speculative execution, while the CPU is waiting to find out whether a process is allowed to access certain data, it starts working with that data before permission is granted. The problem is that the protected data ends up in the CPU cache even if the process never receives permission to access it. Because CPU cache memory can be accessed more quickly than regular memory, and because of the long latency of privilege checks, the process can potentially probe memory locations it should never be allowed to read. As this problem exists in the hardware, there is no direct way to correct it. Software patches have been offered to mitigate the exposure but have led to some degradation in CPU performance. In many cases, the software patch is targeted at a specific product, and installing the wrong patch can severely impact system operation.
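The leak described above can be modeled in a few lines. The toy simulation below is not a working exploit; it simply shows how a secret-dependent cache footprint, combined with fast-versus-slow probe timings, reveals a value the process was never granted permission to read.

```python
# Toy simulation of a cache timing side channel: a speculative access leaves a
# footprint in the cache, and the attacker recovers the secret by timing a
# probe of each possible cache line.

NUM_LINES = 256       # one cache line per possible byte value
SLOW, FAST = 300, 30  # illustrative access latencies, in CPU cycles

def victim_speculative_access(secret_byte: int, cache: set) -> None:
    # The CPU speculatively touches a cache line indexed by the secret byte.
    # The access is later rolled back, but the cache state persists.
    cache.add(secret_byte)

def attacker_probe(cache: set) -> int:
    # Probe every line; the one that loads fast reveals the secret value.
    timings = [FAST if line in cache else SLOW for line in range(NUM_LINES)]
    return min(range(NUM_LINES), key=lambda v: timings[v])

cache: set = set()
victim_speculative_access(secret_byte=0x2A, cache=cache)
print(f"recovered secret byte: {attacker_probe(cache):#x}")  # 0x2a
```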

The most immediate actions security teams and users can take to protect computer systems are to prevent execution of unauthorized software and to avoid access to untrusted websites. Security policies must be in place to prevent unauthorized access to systems and the introduction of unapproved software or software updates.

Posted by
Written by: Prof. Bill Pierce. Submitted by Ivana Shuck
Author Bio
Prof. Bill Pierce, the author of this article, is an Assistant Professor of computer science in the Department of Computer Science & Information Technology at Hood College in Frederick, Maryland. He teaches undergraduate and graduate courses in Computer Architecture, Digital Logic and Switching Theory, Digital Signal Processing, and Musical Computing.