IT infrastructure, the technology we've all come to know and love (or hate, depending on your viewpoint), is the driving force behind all of our machines. What if it were so widely available that you didn't even have to think about it when building a program? In a less dramatic sense, that's what we mean by invisible infrastructure.

History
Remember back in the day when personal computer commercials boasted of being "powered by a 6-core processor with a million gigs of RAM" and other awesome technical specs? That's because in those days, it was important to consumers that their hardware was powerful and capable. But in today's world of software-defined everything, edge computing, and cloud-native applications, infrastructure is simply expected to be available, reliable, secure, and compliant. To put it simply, it just has to work.

Invisible advances?
In the last 10 years, there have been huge advances in the hardware and data center worlds, but the rise of cloud computing in particular means the location of our data is not as important as it once was. Data centers are built all over the world, and the hardware that powers them is cheaper and more readily available than ever…
We talked a little about serverless computing and its basic building blocks, known as functions, in an earlier post. Functions have their own service, conveniently called Function as a Service (FaaS).

What are functions, and how do they relate to serverless computing?
FaaS is the delivery model behind serverless computing. It especially matters to software developers, who can use it to deploy an individual "function": an action or a small piece of business logic. These functions are expected to start within milliseconds and process individual requests before ending. The developer doesn't have to worry about writing code to fit a given infrastructure; they simply write the code and upload it to a serverless platform such as Azure Functions or AWS Lambda, which takes care of the rest. Like serverless computing, FaaS has its drawbacks. It's best suited to event-based architectures (e.g., adding a file to blob storage, which triggers a function to complete a task like sending an email or updating a database), such as those built on Azure Event Grid, Microsoft's serverless event-routing service. It's not a good fit for large applications, where containers might work better. The principles of FaaS include: Complete abstraction of servers away from the…
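To make the idea concrete, here's a minimal sketch of what such a function looks like in Python, following the Lambda-style handler convention (a function that receives one event and returns one result). The event shape is a hypothetical blob-upload notification, not a real platform payload:

```python
import json


def handler(event, context=None):
    """A FaaS-style handler: spins up, processes one event, and exits.

    `event` here is a hypothetical blob-upload notification; real platforms
    (AWS Lambda, Azure Functions) each define their own event schemas.
    """
    blob_name = event.get("blob_name", "unknown")

    # The business logic lives here — e.g., update a database or send an
    # email. For this sketch we just acknowledge the upload.
    result = {"processed": blob_name}

    # Return an HTTP-style response, as many FaaS platforms expect.
    return {"statusCode": 200, "body": json.dumps(result)}
```

The point is what's missing: no server setup, no framework, no routing — the platform invokes `handler` once per event and tears it down afterwards.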
The internet is moving from an exchange of information to an exchange of value. E-commerce, online banking, healthcare, and human resources are just some of the industries that offer services in exchange for value. And it's not just monetary value, either: data has become extremely valuable as well. To conduct these value-based transactions, people use cryptocurrencies such as Bitcoin and their underlying technology: blockchains. In this post, we'll explain what a blockchain is, how it works, and its potential uses. A blockchain is basically a decentralized database kept in a network, where each record (or transaction) is linked to the previous one and updated in real time. This makes the data nearly impossible to corrupt, because altering one record in the chain requires altering every record after it, and a majority of the network has to agree to the changes. For records near the end of the chain there are fewer subsequent records to rewrite, but in a chain such as Bitcoin, new blocks of transactions are added roughly every 10 minutes. So changing a record would require either extreme finesse in timing or a huge amount of computing power that most people don't have, keeping the records safe from unwanted alterations by attackers. Various blockchains…
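The "each record is linked to the previous one" idea can be sketched in a few lines of Python. This is a toy hash chain, not a real blockchain (no network, no consensus, no mining), but it shows why tampering with one record breaks every record after it:

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder "previous hash" for the first block


def block_hash(contents):
    # Deterministic SHA-256 hash of a block's contents.
    return hashlib.sha256(
        json.dumps(contents, sort_keys=True).encode()
    ).hexdigest()


def add_block(chain, data):
    # Each new block stores the hash of the previous block,
    # which is what links the records together.
    prev = chain[-1]["hash"] if chain else GENESIS_HASH
    contents = {"data": data, "prev_hash": prev}
    block = dict(contents, hash=block_hash(contents))
    chain.append(block)
    return chain


def is_valid(chain):
    # A chain is valid only if every block's stored hash matches its
    # contents AND its prev_hash matches the block before it.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else GENESIS_HASH
        if block["prev_hash"] != expected_prev:
            return False
        contents = {"data": block["data"], "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(contents):
            return False
    return True
```

Editing the data in any early block changes its hash, so the next block's `prev_hash` no longer matches and `is_valid` fails — an attacker would have to recompute every later block, which is exactly the property the post describes.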
There's been a lot of buzz around the idea of Software Defined Everything, and it's considered the next "big thing" in IT. Is your organization ready for the promises of a software-defined world? If not, what do you need to consider to be prepared?

What is Software Defined Everything?
It's an overarching term that includes software-defined networking (SDN), storage (SDS), and data centers (SDDC). It is also called Software Defined Infrastructure or SDx (Software-Defined Anything). With the infrastructure virtualized and automated, SDE greatly speeds up IT management and makes it easier to scale up and down (so buying services by the sip from cloud providers becomes easier in turn). The hope is that by separating the software from the bare metal, the underlying hardware becomes cheaper and interchangeable while the software becomes more capable and faster-evolving. Let's dive a little deeper into what virtualizing all that hardware really means. The teams responsible for each component of infrastructure (storage, compute, and network) will need to work together more, with the end goal of creating the ultimate user experience. The idea of merging once-separate departments with their own goals, priorities, and budgets is easy to picture but hard to execute…
As we get more devices and connect them to the cloud, it's only natural that the rise of user-centric computing should follow. What is that, and what effect does it have on the IT industry? Let's start with defining user-centric computing. According to OneStopClick, "A user-centric computing system is a ubiquitous system consisting of information and devices that users can access anytime and anywhere." Wikipedia says, "The chief difference from other product design philosophies is that user-centered design tries to optimize the product around how users can, want, or need to use the product, rather than forcing the users to change their behavior to accommodate the product." These definitions are similar, but there's a little more to it than that. When the focus is on the user instead of the device, the emphasis shifts toward the psychological side and away from the machine.

User-centered attention
What does it mean to make the user the center of attention? Part of it comes from designing applications that are not only easy to use, affordable, and well integrated across devices, but also truly built around the human who's using them. What's the user's current situation? Does their current…
The growing number of devices connected to the Internet of Things (IoT) provides an endless number of possibilities for our world. But as more and more devices come out, their security is impossible to ignore. One non-profit, the Online Trust Alliance (OTA), has taken steps to address the problem of cybersecurity in the IoT by releasing an updated IoT Trust Framework. The framework, originally released in 2015, includes 31 principles of trust and privacy that it recommends IoT manufacturers follow. BYOD is becoming more and more standard in the workplace, and with it comes the rise of the IoT. Because of this boom in popularity, many manufacturers are in a hurry to meet demand and don't always consider the total security of the devices they are making. The IoT Trust Framework aims to serve as a guide for IoT manufacturers, cutting down on the number of security vulnerabilities associated with IoT devices. According to research by the OTA, 100 percent of recently reported vulnerabilities in IoT devices could have been avoided if this framework of trust had been followed.

Internet of Things security testing
One important guideline the framework offers is more security testing for IoT devices. Testing is…