Sunday, 16 December 2018

Pulse Secure VPN connections to SRX devices may experience traffic loss to remote-protected resources

Alert Type:

PSN - Product Support Notification
Product Affected:
Pulse Secure Desktop Client 5.1Rx
Windows 10 April 2018 update (Redstone 4, version 1803)
Windows 10 October 2018 update (Redstone 5, version 1809)
Alert Description:
Pulse Secure Desktop clients on Windows 10 running the April or October 2018 update may encounter traffic loss when attempting to reach remote-protected resources behind an SRX.

Traffic loss will exhibit the following characteristics (a rough diagnostic sketch follows the list):
  • Client traffic will arrive at SRX via VPN tunnel as ESP packets
  • SRX will decrypt traffic and pass to remote-protected resource
  • Remote-protected resource will reply sending traffic back towards SRX
  • SRX will encrypt traffic and send to client via VPN tunnel as ESP packets
  • ESP VPN packets will be received by client LAN adapter
  • Decrypted packets will not be reported on VPN virtual adapter
  • Client application will not report receiving packet from remote-protected resource
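For reference, a rough diagnostic sketch in Python is shown below (it is not part of this notification). It assumes the psutil package is installed and that the adapter names are adjusted to match the local machine. It samples per-interface receive counters while traffic is generated towards a remote-protected resource: if the LAN adapter counter grows (ESP arriving) while the virtual adapter counter does not (no decrypted packets delivered), the behavior matches the symptoms above.

  # Hypothetical diagnostic, not from this PSN. Requires: pip install psutil
  import time
  import psutil

  LAN_ADAPTER = "Ethernet"        # hypothetical physical adapter name
  VPN_ADAPTER = "Pulse Secure"    # hypothetical virtual adapter name

  def rx_bytes(nic):
      # Per-interface receive counters; returns 0 if the adapter is absent.
      counters = psutil.net_io_counters(pernic=True)
      return counters[nic].bytes_recv if nic in counters else 0

  before = {nic: rx_bytes(nic) for nic in (LAN_ADAPTER, VPN_ADAPTER)}
  time.sleep(10)  # generate traffic to the remote-protected resource meanwhile
  after = {nic: rx_bytes(nic) for nic in (LAN_ADAPTER, VPN_ADAPTER)}

  for nic in (LAN_ADAPTER, VPN_ADAPTER):
      print(f"{nic}: +{after[nic] - before[nic]} bytes received")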
Solution:
Pulse Secure continues to investigate solution options related to the traffic processing in the client virtual adapter.

At this time, there are no known workarounds for SRX Dynamic VPN when using Pulse Secure Desktop clients on Windows 10.

This article will be updated monthly or upon any new progress updates.
Implementation:
The Windows 10 update version in use can be located as follows (a scripted check is sketched after the list):
  •    Right click on Start/Windows bar
  •    Select System
  •    Scroll to Windows Specifications section
  •    Locate Version Number
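For environments where a scripted check is preferable, the following minimal Python sketch (not part of this notification) reads the release ID from the registry. It assumes a Windows host and uses only Python's standard winreg module.

  # Hypothetical helper, not from this PSN: reads the Windows 10 release ID
  # (e.g. 1803, 1809) from the registry.
  import winreg

  def windows_release_id():
      key = winreg.OpenKey(
          winreg.HKEY_LOCAL_MACHINE,
          r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
      )
      try:
          # "ReleaseId" holds the version string shown under Windows Specifications.
          value, _type = winreg.QueryValueEx(key, "ReleaseId")
          return value
      finally:
          winreg.CloseKey(key)

  if __name__ == "__main__":
      release = windows_release_id()
      print("Windows 10 version:", release)
      if release in ("1803", "1809"):
          print("This build is in the range affected by this notification.")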
 
Version #   Release Date  Marketing Name
  • 1507        Jul 2015       (Threshold 1)
  • 1511        Nov 2015      (Threshold 2 / November Update)
  • 1607        Aug 2016      (Redstone 1 / Anniversary Update)
  • 1703        Apr 2017       (Redstone 2 / Creators Update)
  • 1709        Oct 2017       (Redstone 3 / Fall Creators Update)
  • 1803        Apr 2018       (Redstone 4 / April 2018 Update)
  • 1809        Oct 2018       (Redstone 5 / October 2018 Update)
Note:  Windows 10's default behavior is to install Microsoft updates automatically, including the April and October 2018 updates.

Saturday, 3 November 2018

Be Ready for Cloud, 5G and IoT with Advanced Security Acceleration

Now more than ever, our networks and infrastructure require security that keeps pace not only with cybercrime, but with the demands of ubiquitous streaming, a myriad of devices and accelerated cloud evolution. This explosive growth and fluid environment means that organizations need more muscle from their firewalls.

To address this trend, Juniper Networks is introducing a new services processing card (SPC3) for the SRX5400, 5600 and 5800 Next-Generation Firewalls. With up to 11x performance gains, this new card transforms the SRX5000 line of Services Gateways into one of the most powerful firewalls on the market. With the addition of the SPC3 Advanced Security Acceleration card, next-generation firewall services can run without slowdowns or interruptions. And when business needs call for additional capacity, expansion is modular, making scalability simple.

Powerful. Scalable. Extensible.
Service providers today are encountering an explosion of mobile devices, IoT connections and media traffic streaming. The SRX5000 line with SPC3 Advanced Security Acceleration is 5G ready, delivering performance and scale with capacity to spare. For maximum uptime, SPC3 cards can be added to any of the SRX5000 chassis without service interruption, assuring the highest uptime and security continuity. With SPC3 Advanced Security Acceleration, the SRX5000 line is ideal for Gi firewall, roaming firewall and security gateway use cases, assuring maximum defense.

As enterprises build out their clouds, the SRX5000 line with SPC3 is the optimal choice for defending the data center edge with next-generation firewall security features along with SSL decryption to mitigate threats hidden in encrypted traffic. For multicloud, it delivers high performance and protected connectivity with maximum session capacity. In a headquarters environment, the SRX5000 line with SPC3 can act as a multi-services gateway to support large-scale VPN hubs.

Providing customers with grow-as-you-go expandability, the SRX5000 line with SPC3 enables customers to support the scale and performance needs of today with future-proof expandability when higher performance, greater scale and/or additional security is required.

Accelerated NGFW for the Unified Cybersecurity Platform
As an integral part of Juniper’s unified cybersecurity platform, the SRX5000 line of Services Gateways with SPC3 delivers the power, scalability and extensibility needed to fully activate next-generation security features with seamless integration with malware detection, threat behavior analytics and automated policy and remediation capabilities.

SRX5400, 5600 and 5800 Next-Generation Firewalls running SPC3 are the best choices for enterprise customers and service providers that need a high-performance and scalable next-generation security solution.

Sunday, 7 October 2018

Edge computing is the place to address a host of IoT security concerns

Edge computing can greatly improve the efficiency of gathering, processing and analyzing data gathered by arrays of IoT devices, but it’s also an essential place to inject security between these inherently vulnerable devices and the rest of the corporate network.
First designed for the industrial IoT (IIoT), edge computing refers to placing an edge router or gateway locally with a group of IIoT endpoints, such as an arrangement of connected valves, actuators and other equipment on a factory floor.

Because the lifespan of industrial equipment is frequently measured in decades, the connectivity features of those endpoints either date back to their first installation or they’ve been grafted on after the fact. In either case, the ability of those endpoints to secure themselves is seriously limited, since they’re probably not particularly powerful computing devices. Encryption is hard to cram into a system-on-a-chip designed to open and close a valve and relay status back to a central control panel.

IIoT can be a security blind spot

As a result, IIoT is a rich new target opportunity for malicious hackers, thanks in large part to the difficulty of organizing and gaining visibility into what’s happening on an IIoT network, according to Eddie Habibi, CEO of PAS Global, an industrial cybersecurity company. Habibi has been working in industrial control and automation for about 15 years.
A lot of connected IIoT devices have known, exploitable vulnerabilities, but operators might not have the ability to know for certain what systems they have on their networks. “The hardest thing about these older systems that have been connected over the past 25 years is that you can’t easily do discovery on them,” he said. Operators don’t know all the devices they have, so they don’t know what vulnerabilities to patch.
It’ll be decades, Habibi said, before many IIoT users – whose core devices can date back to the 1980s and even the 1970s – update this important hardware.

Edge networks provide security

That’s where the edge comes in, say the experts. Placing a gateway between the industrial endpoints and the rest of a company’s computing resources lets businesses implement current security and visibility technology without ripping and replacing expensive IIoT machinery.
The edge model also helps IIoT implementations in an operational sense, by providing a lower-latency management option than would otherwise be possible if those IIoT endpoints were calling back to a cloud or a data center for instructions and to process data.
Most of the technical tools used to secure an IoT network in an edge configuration are similar to those in use on IT networks – encryption, network segmentation, and the like. Edge networking creates a space to locate security technologies that limited-capacity endpoints can’t handle on their own.
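As an illustration of that point (not drawn from the article), the sketch below shows the kind of job an edge gateway can take on for constrained endpoints: accepting plaintext readings from local devices and forwarding them to a backend over TLS, so the endpoints themselves never have to implement encryption. The hostnames, ports and payload handling are hypothetical placeholders.

  # Illustrative edge-gateway sketch; endpoint and backend details are hypothetical.
  import socket
  import ssl

  LOCAL_BIND = ("0.0.0.0", 9000)               # local sensors send plaintext here
  CLOUD_HOST = ("backend.example.com", 8883)   # hypothetical TLS-protected backend

  def forward_readings():
      context = ssl.create_default_context()   # verifies the backend certificate
      with socket.create_connection(CLOUD_HOST) as raw:
          with context.wrap_socket(raw, server_hostname=CLOUD_HOST[0]) as tls:
              with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as listener:
                  listener.bind(LOCAL_BIND)
                  while True:
                      reading, _sensor_addr = listener.recvfrom(1024)
                      # The gateway supplies the security layer the endpoint lacks.
                      tls.sendall(reading)

  if __name__ == "__main__":
      forward_readings()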
Mike Mackey is CTO and vice president of engineering at Atonomi, makers of a blockchain-based identity and reputation-tracking framework for IIoT security. He said edge computing adds an important layer of trust between a company’s backend and its potentially vulnerable IIoT devices.
“[N]ow you’re adding network translation to the end-to-end communication between that IoT device and whatever it’s ultimately communicating with, which, today, is typically the cloud,” he said.
Other experts, such as Windmill Enterprise CEO Michael Hathaway, also highlighted that widely used cloud-based backends pose problems of their own. Enterprises are losing control over their security policies and access with every new cloud service they subscribe to, he said.
“Enterprise customers can be very nervous about hooking up an automation system directly to the Internet – it needs a last layer of intelligence and security,” Hathaway said.
Consequently, some of the most effective IIoT implementations can be those that leave the existing structures and networks in place – hence the popularity of the edge architecture, which works both as a buffer and a link between the IT network and a company’s operational technology.
Russ Dietz, chief product security officer at GE Digital, said that old-yet-irreplaceable technology already on the factory floor plays an enormous role in shaping the IIoT infrastructure laid on top of it.
“Over time, we might migrate to a fully digital world where we blend those two together, but because industrial is going to live in this very long-tail environment, we have to be able to provide separate trust for both of those,” he said. “So we may weight how much we trust sensors in a different category than how much we trust a control system.”

Edge networks must fit unique sets of needs

According to Hathaway, it’s important to recognize that not all edge solutions are created equal, and that different businesses will have different requirements for an edge computing deployment. An automotive manufacturer might need to track a lot of process-oriented data and rate information about productivity, while an oil-production facility is likely to need to track things like pressures and volumes through a vast array of pipelines.

“You can’t possibly have provided a cookie-cutter solution,” said Hathaway, adding that, while the tools and approaches used will have commonalities, everyone’s security needs will be different.
The eventual hope for most IIoT deployments is that they provide enough machine-generated data to help businesses make smart decisions for the future, according to Simon Dowling, CTO of edge compute vendor ORI.

Protecting the data those machines send back for analysis – whether at the edge layer or back in the cloud or data center – is of paramount importance.

“As we’re moving towards a world where there is – whether it’s industrial IoT or it’s more commercial/consumer-focused IoT – a level of expectation that these devices will provide more meaningful action,” he said.

And if businesses want to stay on top of cybersecurity threats, they have to realize that it’s not simply a matter of pushing out updates and getting the latest and greatest technology up and running on their systems, said Aruba/HPE's vice president of strategic partnerships, Mike Tennefoss. It’s also understanding the way those updates and additions will tie into the operational technology stack.
“Security is the heart and soul of IT, and what you see happening is that IT systems and processes of cybersecurity are pushing down deeper and deeper into the operational technologist’s realm,” he said.

Saturday, 1 September 2018

Juniper Bringing 400GbE to PTX, QFX, MX Switches and Routers

Juniper Networks officials are continuing to push their product portfolios toward 400 Gigabit Ethernet as they eye the bandwidth demands that will be coming with the migration to 5G networks and the increasing adoption of such modern technologies as cloud computing, 4K video, and augmented and virtual reality.

As part of the 400GbE roadmap unveiled July 24, the company later this year and in 2019 is bringing 400GbE capabilities to its PTX, QFX and MX series switch and router lineups aimed at data centers, WANs, enterprises and telecommunications for use cases such as cloud services, hyperscale environments, network backbones and data center interconnects. The refresh of the switches and routers is the most recent step in Juniper’s push toward 400GbE, including the announcement last month of its 400GbE-capable Penta Silicon.

In addition, company officials said plans are underway for new generations of ExpressPlus and Q5 silicon to support 400GbE, as well as other features.

According to Manoj Leelanivas, executive vice president and chief product officer at Juniper, the work the vendor is doing around 400GbE—not only with upgrades to its products but its work to ensure the QSFP-DD spec for 400GbE kept the same interface densities as those with 100GbE—will give businesses an easy migration path and improve both bandwidth and costs.
“Customers will realize the economic benefits of a 400GbE solution that breaks the historic cost-per-bit economics cycle that has been seen time and time again,” Leelanivas wrote in a post on the company blog. “Delivering routing and switching platforms that offer investment protection when transitioning from 100GbE to 400GbE will also inspire our customers to pursue new applications—thanks to the significant amount of bandwidth now available to them.”
The industry is primed for the arrival of 400GbE shipments. According to a report earlier this year by analysts at Crehan Research, initial shipments of 400GbE switches will come this year and grow significantly. By 2022, most of the Ethernet network bandwidth in data centers will be 400GbE. Shipments of 100GbE systems will surpass those of 40GbE, three years after initial shipments hit the market, illustrating the demand for faster networks.

"Beginning with high-density 100GbE systems, we entered a new era of much faster data center switch upgrades, and that trend is predicted to continue with 400GbE," Seamus Crehan, president of Crehan research, said in a statement in January. "With its expected market-leading price per gigabit and no foreseeable shortage of demand for higher-speed networking capacity in cloud data centers, 400GbE should surpass a million ports shipped in less time than it took 100GbE to reach that threshold."

Organizations are increasing the capacity of their data centers to address growing high-performance applications and as the connectivity in their servers moves to 50GbE and 100GbE uplinks, according to Juniper officials. The vendor is enhancing its QFX series of switches with 400GbE capabilities, including the 3U (5.25-inch) QFX10003, which will offer 32x400GbE and can scale up to 160x100GbE. It will be powered by the next-generation Q5 silicon and offer a deep buffer enabled by Hybrid Memory Cube, which will enable it to handle spikes in network traffic and reduce application latency, they said.

It will be available in the second half of this year.
The 1U (1.75-inch) QFX5220 will run on merchant silicon and offer 32x400GbE, as well as 50GbE, 100GbE and 400GbE interfaces for server and inter-fabric connectivity. The switch will be available in the first half of 2019.

For the WAN, Juniper officials introduced the 3U PTX10003 Packet Transport Router for backbone, peering and data center interconnect applications. The system can be used for high-density 100GbE and 400GbE deployments and is aimed at scale-out and cloud environments. The router, due in the second half of the year, includes native MACsec support for 160x100GbE and FlexE support for 32x400GbE interfaces.

As part of the 400GbE roadmap, Juniper officials also pointed to the MX Series 5G Universal Routing Platform, which was announced in June and is powered by the new Penta Silicon and offers 400GbE interfaces.

Wednesday, 1 August 2018

Red Hat's only business plan is to keep changing plans

At the recently concluded Red Hat Summit, Red Hat CEO Jim Whitehurst said old-fashioned business planning is dead. It's being replaced by trying multiple ideas at once, dumping those that don't work, and doing all this as quickly as possible.

Before your eyes glaze over, keep in mind Red Hat has had 64 straight quarters of revenue growth. The company, while best known for its Linux operating system, Red Hat Enterprise Linux (RHEL), has transformed itself into a cloud power with Red Hat OpenShift. And, it's well on its way to becoming the first billion-dollar-a-quarter open-source company. Red Hat knows business.

Drawing from his keynote speech and a pair of interviews, Whitehurst explained, "In a world that is less knowable, where we're solving problems in a more bottom-up approach, our ability to effectively plan into the future is much less than it has been in the past."

That's because, Whitehurst explained, in today's world, you must live with ambiguity. Yesterday's business planning was for "companies that were optimized for a world that moved at a slower pace". These old tools aren't optimized for today's world.

For example, Whitehurst said he's spoken recently to the COO of a large bank. "They wanted to talk about what their major strategic technology initiatives should be for the next three years. I said, 'Time out. The likelihood that you'll choose the right initiative is next to none.' We don't even know how payment systems will look in three years."

Does that sound crazy to you? Think about it. Whitehurst cited the example of car companies. "If you were GM a few years ago, you planned on how to compete against Ford and Chevy. Now Uber has changed all that. We don't even know if people will continue to buy cars." Banks? Who knows what Blockchain will do to banks.

Whitehurst cited big data as one instance where old-style planning could have hurt your company. "For example, there were hundreds of big data programs. They've been winnowed down to two or three winners." But, at the start, had you bet the farm on one and it turned out not to be a winner, you'd have wasted your time and energy.


The prime problem is, "When you plan, you need to make assumptions. We can't do that now. You're trying to plan for a world that probably won't exist."

So, in our "uncertain, volatile world, where you can be blindsided by orthogonal competitors coming in, you have to recognize that you can't always plan or know what the future is," said Whitehurst.
Instead, you configure your company for "change without necessarily knowing what the change will be." You must empower your staff with the knowledge and tools to make the right business decisions. This is done with "a greater level of engagement by people across the organization, to ultimately be able to react at a faster speed to changes that happen."

The best way to do this is with an Agile application development approach of trying, learning, and then modifying business decisions on the fly. To do this you must make the best use of business programs not so much "automating what people are doing and more about helping to better enable people with tools and technology," Whitehurst said.

To make this work, Whitehurst said executives must use those tools to engage with staff in real time to make the right decisions and take advantage of opportunities.

How? At Red Hat, he said, "We try ten things in 90 days, and we'll kill most of them. We go into it knowing that's what's going to happen but from that we'll learn and make decisions. We try to make small bets and iterate quickly."

That doesn't always mean Red Hat makes the right plans, Whitehurst admitted. "When we started OpenShift, we weren't using Kubernetes. In a traditional organization, we'd ride our old plan into the ground. But, when we saw Kubernetes winning, we switched to Kubernetes and OpenShift is a winner."

So, in Red Hat's business "plans", Red Hat recognizes it "doesn't know where the future is going, but we're willing to admit we're wrong and we can pivot to a new plan that will succeed."
And afterwards? "We intentionally don't go back to the plan and see if we're executing against old plans. As soon as you commit to a rigorous plan, you double down and [make] rigorous bad decisions." Instead, Red Hat is "open to change and we hold people accountable for how hard they're working, not how closely they've stuck to a plan."

Is this efficient in the way old business plans were? No. But we're not living in the 20th century. Sure, if your business is Apple, Amazon, or Walmart, you can still focus on ironing out the last inefficiencies in your supply chain. Good for you. But, for many businesses today "innovation is more important than efficiency," concluded Whitehurst.

Based on Red Hat's impressive track record, he's got a point.


Tuesday, 24 July 2018

Analyst Reports: Juniper a Leader in Enterprise Networking

There are countless ways to analyze a market, so it’s no surprise that different analysts will come to different conclusions when evaluating companies. In this year’s enterprise data center analyses, however, we feel the major analyst firms have reached the same conclusion: Juniper Networks is a leader in the enterprise networking space.

The Results
Earlier this year, Forrester Research listed Juniper as a leader in The Forrester Wave™: Hardware Platforms For Software-Defined Networking, Q1 2018.

We believe that finding is reinforced by Juniper being named a Leader in Gartner’s latest Magic Quadrant for Data Center Networking 2018, where Gartner writes:


[Gartner Magic Quadrant for Data Center Networking 2018 graphic]
We believe Gartner applies comprehensive criteria when evaluating companies for the Magic Quadrant, including a vendor’s competitive offering, a compelling vision to move the industry forward and traction in the marketplace. Juniper is confident in its execution and strategy, which we believe are being recognized by various analysts.

Execution
We have grown our enterprise business to north of $1.4 billion in annual revenue. Built by the best engineering team in the industry and backed by a world-class support and services organization, we have the breadth and depth of portfolio to solve enterprise needs from data center to branch, from hardware to software, from routing and switching to security and from transport to orchestration.
  
Vision
A company’s strategy reflects its take on how the market will evolve. Our strategy is centered on the transformation the cloud is driving across all corners of the market. Our focus is on engineering simplicity for our customers and partners and helping them in their transition to the cloud.

Our Difference
Whether it’s building private clouds or leveraging public clouds, the race to the cloud is on. Increasingly, enterprises are choosing multicloud. With Juniper’s multicloud-ready hardware powered by Junos, combined with our multicloud management solution underpinned by Contrail Enterprise Multicloud, we are well-positioned to help enterprises navigate this migration.

Multicloud also means multi-vendor, so we have built our portfolio on the principles of being open. We believe multicloud is about managing resources as a single, cohesive infrastructure with consistent policies and operations regardless of what vendors you use. Customers choose us to stitch together all the disparate parts of their network, where each component is not only insertable and manageable, but also replaceable.

Whether it’s leading the standards push for open protocols, developing richly programmable interfaces to our software or supporting the open source community through efforts like Tungsten Fabric (previously OpenContrail), Juniper is tackling the future in a way that avoids unnecessary lock-in.

We are also different in our view that the future is not just an incremental turn of the crank. Where some companies’ portfolios are mostly derivative work built as a follow-on to their legacy efforts, we have built a portfolio that leverages technology and innovation to leapfrog the status quo. Yes, it is good to reduce cost, but the architectures that harness multicloud will be more than cost-cutting deployments. This is about operational transformation, and that requires a different way of thinking.

More than Just Products
Cloud and multicloud are, at their core, an evolution of operations. Gone are the days when individual enterprise silos can be architected, deployed and managed separately. The power of multicloud is about bringing the full enterprise network together as a cohesive entity.

And such change requires more than just products. Enterprises might start with technology, but their successful path to the cloud will be dependent on navigating the tooling and process implications of multicloud. Only those companies with global reach within their support and services organizations will be well-equipped to act as stewards along the journey.

Ultimately, incumbents in any industry make their living on selling ‘here’. Competitors try to carve out their place in the market by selling ‘there’. At Juniper, we think of things differently. We provide value by helping our customers get from ‘here’ to ‘there’.

For more information, please visit: www.juniper.net/dc-leader

Gartner Magic Quadrant for Data Center Networking, Andrew Lerner, Joe Skorupa, July 2018.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Juniper Networks.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Sunday, 1 July 2018

Brand and Technology – How the Digital Experience Affects Brand Loyalty

Marketing departments have long been key technology drivers – until recently, with the consent of the IT department. The tech team have been more than happy to support laptops with PowerPoint installed, specialist marketing software and customer databases, for instance.
In recent years marketers have had a tempting choice of tools, such as marketing automation, SEO and CRM, to help them gain a competitive advantage.
The challenge to IT departments has been to assess, implement and integrate these tools in a timely way. As the recent Logicalis CIO Survey shows, the majority of CIOs say between 60-80% of their time is spent on day-to-day tasks. Little wonder then that, in the face of increased security threats and ‘keeping the lights on’, this has not been a top priority for their resources.
The result has been a rise in Shadow IT; the introduction of SaaS services, mobile devices, apps and other third party offerings into the corporate environment with little, if any, IT department involvement. So much so that, as the Logicalis CIO Survey has consistently shown, Shadow IT is increasingly accepted, even embraced.
But is this disconnect between marketing and IT something we should be concerned about?

Marketing and IT – Time to Hold Hands?

The short answer is ‘yes’.  That is, as the march to digital transformation continues, businesses should be asking themselves just how well marketing and IT are working together to ensure customers’ experience of the brand is positive.
The precise role of this marketing and IT marriage depends on the nature of the business – but in all cases, it is ever more important.
For disruptive digital businesses, like Uber, Airbnb and Amazon, technology is the brand, so its importance is glaringly obvious.
For longer established businesses, the picture is less clear-cut, but just as important.  In these cases, the bare minimum is that day-to-day customer facing technology does not negatively affect the brand. Unfortunately there are all too many examples of companies getting this wrong.

Bad Wi-Fi and your Brand

While travelling in Australia last year, I was struck by the number of hotels that made a charge for Wi-Fi and internet access.
I commented on this to a colleague in Melbourne and he agreed. He told me that the best-kept secret in Melbourne was a coffee chain that offered exceptional free Wi-Fi.
That certainly resonated with me. I don’t know how many times I have chosen meeting venues based on the quality of the Wi-Fi – even if the coffee isn’t great.
Similarly, one of the train operators in the UK offers free Wi-Fi for passengers. If you are in standard class it is, as one Twitter user said, “like surfing through the eye of a needle.” Not great. If you are in First Class, you get a much better service.
The approach entrenches the existing negative feelings passengers have about the brand.

Bad Retail Tech

National newspapers have recently reported the chaos caused when a bank updated its online services. At the time of writing it is producing a maelstrom of negative brand chatter – and they are not alone.
Cash machines that don’t work and faulty card readers in shops and restaurants all produce a sinking feeling in customers, borne from the disappointment that they cannot give a business their custom.

Bad Tech and Customer Service

So many calls to customer service lines are still plagued by slow, faulty and siloed computer systems – and these are all issues that make it harder for staff to delight their customers.
Why do you have to transfer me to someone else because “that’s on a different system”? Why do I get lost in the phone system and why, when I do get through, does my information not follow me, so I have to repeat the problem all over again? That question has already been answered – it’s on a different system.
It’s tough enough for call centre staff without throwing a tech spanner in the works.

Data Protection and Your Brand

In a 2017 survey a massive 70% of consumers stated they would stop doing business with an organisation if it experienced a data breach.
Furthermore, 93% of consumers said they would take or consider taking legal action against a business that has been breached.
The question here is whether the marketing function is working with IT on data security and incident planning? Or is it only a marketing problem once the worst has happened?

One Strategy for the Whole Business

When it comes to technology and the customer experience, it is no good simply asking sales and marketing people to map the customer journey and build communications to fit.
The whole business needs to be involved and two important issues need to be addressed.
First, and strategically, the question should not be “what shall we do with this tech”? (be it AI, machine learning, data analytics or the entire digital transformation) but “what experience do our prospects, leads and customers want, and what tech would best deliver that?”
Second, resources need to be made available to IT departments to comfortably prioritise this: Only 25% of CIOs outsource more than 50% of their IT. Delivering greater revenue and better customer experiences doesn’t need to mean massive internal reorganisation.


Joanne Nelson, VP International Marketing, Logicalis, looks at the technology user experience and its influence on brand perception.

Sunday, 10 June 2018

8 ways to build a future-proof organization

by Chris Gagnon and Aaron De Smet
 
Here is today’s reality: The average large firm reorganizes every 2-3 years and it takes over 18 months. With technology advances changing everything, wait and see isn’t an option.

Those who get it right are creating adaptive, fast-moving organizations that respond quickly and flexibly to opportunities and challenges. They move intelligent decision-making to the front lines. Their process functions more like a network and less like a chain of command. Gone is the standard, “safer” modus operandi.

One famous retailer empowers its call center employees and, in turn, delivers “wow” service. Instead of being a place typically associated with high stress and a slog for employees, this retailer gave each team member the freedom and authority to truly build relationships with customers. This has led to greater sales numbers, customer engagement and loyalty.

We have identified eight emerging characteristics of the organization of the future. We see versions of these elements so often, they provide at least the organizational outline to win:
  1. Worship speed. It’s an imperative. Look at Amazon CEO Jeff Bezos’ April 2017 letter to shareholders. Bezos highlights making “high-velocity” decisions. “If you’re good at course correcting,” he contends, “being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.”
  2. Shift to an emergent strategy. One that entails a relentless quest and an undefined end point. The pursuit should involve unceasing questioning of “How do we add value?” and an organizational platform that formulates an emergent mix of multiple strategies executed immediately.
  3. Unleash decision-making. Grasp how your organization operates and reflects its core proposition. This includes understanding how to handle “big-bet” decisions shaping your company’s future, “cross-cutting” decisions like pricing and new product launches, and low-stakes “delegated” and “ad hoc” decisions that arise unexpectedly.
  4. Reimagine your structure. The more interconnected your organization and the more decision-making diffuses, the easier it becomes to sustain high performance. Even the most hierarchical chain of command – the U.S. military – moved to decentralize decision authority to help beat back Al Qaeda’s Iraqi-based forces. Free your initiatives and decisions from unnecessary hierarchy.
  5. Personalize talent programs. New people analytics tools are helping organizations manage and develop their people with greater precision. After extensive training, for example, a fast-food restaurant chain identified and taught behaviors that inspired colleagues.
  6. Rethink your leadership model. Leadership can come from anyone. In agile organizations, leaders lead more by influence than control. When we ask executives how to solve a given issue, only a few consider how to create conditions in which an ecosystem can largely self-manage, where individuals learn and problems are avoided before they manifest. Yet, we believe the future will demand this.
  7. Adopt a recipe to run the place. Siloed firms execute a wide array of processes and practices differently across the organization, generating an incongruous hash. The healthiest firms able to sustain performance and renew over time employ a simpler approach. They don’t sample à la carte.
  8. Cultivate purpose, values and social connection. Future organizations will emphasize aligning around common principles. Participants will use defined rules of engagement in decision-making, collaborate to create value and earn the credibility to lead rather than have leadership imposed from above.
Reorganizations are hard to get right. They distract senior leadership on down. They trigger real consequences for meeting investor expectations. They run the risk of bewildering employees. But in the face of today’s massive disruptions, an ethos of urgency actually serves to smooth gyrations between “hurry up” and “settle in.”

Those who get it right create adaptive, fast-moving organizations that respond quickly and flexibly to opportunities and challenges. They move intelligent decision-making to the front lines. Their process functions more like a network and less like a chain of command. By combining urgency with agility, capability and identity, you generate an organization that can play fast and long. That’s the future.

Friday, 1 June 2018

17.4R1-S3: Software Release Notification for Junos Software Service Release version 17.4R1-S3

Alert Type:

SRN - Software Release Notification
Product Affected:
MX, MX 10003, PTX 10008/10016, PTX3000/5000, EX4300, EX4600, EX9200, QFX5100, QFX5110, QFX5200, QFX10002/8/16, SRX1500, SRX300, SRX320, SRX340, SRX345, SRX4100, SRX4200, SRX4600, SRX5400, SRX5800, SRX550 M, VMX, VMX-nested, VRR, Network Agent, vSRX, NFX250
Alert Description:
Junos Software Service Release version 17.4R1-S3 is now available for download from the Junos software download site.
Download Junos Software Service Release:
  1. Go to Junos Platforms - Download Software page
  2. Select your product
  3. From the Type/OS drop-down menu, select Junos SR
  4. From the Version drop-down menu, select your version
  5. Click the Software tab
  6. Select the Install Package as needed and follow the prompts
 
Solution:
Junos Software Service Release version 17.4R1-S3 is now available.

The following are incremental changes in 17.4R1-S3.

 
PR Number Synopsis Description
1115686 RPD memory leak caused by repeated RSVP RSB (reservation state block) deletes When an RSVP path is deleted (because of LSP deletion or switch-over to new path) RSB (Reservation state block) data structure has to be deleted to free up memory. When RSB deletion is performed, LSP attribute object in RSB is not deleted by RPD. This causes build up of RPD memory usage over a period of time (memory leak). Build up of RPD memory is proportional to the frequency of RSB deletes.
1265548 Traffic drop on MPC with "Link sanity checks" and "Cell underflow" errors When certain hardware transient failures occur on an MQ-chip based MPC, traffic might be dropped on the MPC, and syslog errors "Link sanity checks" and "Cell underflow" are reported. There is no major alarm or self-healing mechanism for this condition.
1275766 The rpd may crash in LDP L2circuit scenario In an L2 circuit scenario, while processing an advertisement of an LDP-signaled L2 circuit, a stale binding is created because of a corrupted LDP structure. As a result, the rpd crashes.
1278153 After bfdd restart, an issue with NG-MVPN and L2VPN route exchange causes mVPN and VPLS traffic drop Killing or restarting the bfd daemon on a PE router causes an issue with NG-MVPN and L2VPN route exchange, resulting in traffic drop. The workaround is to clear the BGP neighbor on the route reflector.
1293014 traffic drop during NSR switchover for RSVP P2MP provider tunnels used by MVPN When next-generation MVPN is configured with RSVP provider tunnels and NSR is used, then the egress router for the tunnel might not correctly replicate some of the tunnel state to the backup Routing Engine, leading to temporary traffic loss during NSR failover for the affected tunnels.
1298175 L2TP subscribers might get stuck in terminating state during login. Layer 2 Tunneling Protocol (L2TP) and L2TP access concentrator (LAC) subscribers might get stuck in terminating state because of the race condition during login.
1298612 MX platforms may display false positive CB alarm "PMBus Device Fail". MX platforms may display false positive CB alarm "PMBus Device Fail".
1299580 The traffic in P2MP tunnel might be lost when NG-MVPN uses RSVP-TE When NG-MVPN (Next-Generation Multicast VPN) and RSVP-TE (Resource Reservation Protocol Traffic Engineering) are configured at the same time, the traffic in a P2MP tunnel might be lost if NG-MVPN has more than one routing instance on the router.
1300716 Interfaces might go down when PFE encounters "TOE::FATAL ERROR" Interfaces might go down when PFE (Packet Forwarding Engine) encounters "TOE::FATAL ERROR" (TOE is a module in PFE, the fatal error can be caused either by software issue or hardware issues like memory parity errors or others). Please reboot the line card to recover the service when hitting the issue.
1300989 Condition based policy fails to take action even though condition is matched When the policy condition configurations are used in export policy in BGP add-path scenario, condition based policy fails to take action even though condition is matched.
1303459 Fan speed changes frequently on MX Series after an upgrade to JUNOS software with the change introduced by PR:1244375 On routers with XM-chip based line cards (e.g., MX platform with MPC3E/4E/5E/6E/2E-NG/3E-NG), log messages might report fan speed changes between full and normal speed continuously, due to XM-chip reaches a temperature threshold.
1305284 Dfwd might crash during execution of "show firewall templates-in-use" command In a subscriber-management environment, dfwd process might crash during execution of "show firewall templates-in-use" command if a CLI session disconnects before the complete output of this command is received.
1306930 The RSVP node-hello packet might not work correctly after the next-hop for remote destination is changed An unexpected error such as an RSVP authentication failure, or an RSVP node-hello packet is rejected when the next-hop for remote node's loopback is changed. 
1309288 PFE error messages flood with "expr_sensor_update_cntr_to_sid_tree" after delete and rollback of "protocols isis source-packet-routing node-segment" This problem occurs when "protocols isis source-packet-routing node-segment" is deleted and rolled back. It can lead to the router streaming incorrect counter values for SR stats.
1312117 The rpd process might crash if LDP updates the label for BGP route When LDP egress-policy is configured for the BGP route and a label is received for a BGP route in inet.0 table from LDP, if BGP receives a new label for the same BGP route matching the LDP egress-policy, rpd might crash because of updating the new label.
1312336 PEM alarms and I2C Failures are observed on MX240/MX480/MX960/EX92/SRX5K series On MX240/MX480/MX960/EX92/SRX5K series, PEM alarms and I2C Failures with PCF8584 are observed.
1315009 The L2TP LAC might drop packets that have incorrect payload length while sending packets to the LNS On all MX-Series platform, if the Point-to-Point Protocol over Ethernet (PPPoE) subscribers runs on Layer 2 Tunneling Protocol (L2TP) Access Concentrator (LAC) over dual-tagged VLAN and auto-sensed VLANs, all the packets that are being sent to the L2TP Network Server (LNS) might be dropped, because the LAC Ethernet pads the PPPoE packets with larger size.
1315207 Service Interim Missing for Random Users in JSRC scenario Service Interim Missing for Random Users in JSRC scenario
1315577 MX10003: A mixed-AC PEM alarm is raised despite all PEMs being AC low The alarm is raised if mixed AC PEMs are present. The criteria for checking whether mixed AC is present have been changed: if the PEM is AC (HIGH), the first bit of pem_voltage is set, and if it is AC (LOW), the second bit is set; if both bits are set, mixed AC is present.
1316192 The FAN speed might frequently keep changing between normal and full for MX platform On MX platform with MPC cards, frequent FAN speed change might be seen.
1317011 Log messages "L2ALM Trying peer/master connection, status 26" is showed on SRX device Fix for internal L2ALM connection on SRX5K between IOC cards and RE. It will prevent repeating of following log message "L2ALM Trying peer/master connection, status 26."
1317019 The PPPOE subscribers might encounter connection failure during login In Point-to-Point Protocol over Ethernet (PPPOE) subscriber environment, If one subscriber logs in with incorrect radius attribute(such as Framed-IP-Address, Framed-IPv6-Prefix, Delegated-IPv6-Prefix attribute is logically 0; Framed-IP-Address = 255.255.255.254) and then logs out, all the subscribers on the same Packet Forwarding Engine (PFE) might not be able to reconnect.
1317023 LSDB entry cleanup may cause an rpd crash if loop-free alternate is configured When the ISIS database is cleaned, an rpd crash may be observed if loop-free alternate is configured. The ISIS database can be cleaned even when ISIS is deactivated.
1317132 The policy configuration might not be evaluated if policy expression is changed If Border Gateway Protocol (BGP) import policy is configured with a policy expression, the configuration might not be evaluated after the policy expression is changed later.
1317223 The output from "show configuration <> | display json" might not be properly enclosed in double quotes If the output from "show configuration <> | display json" contains alpha-numeric (like 10m, 512k etc) or wildcard (like <*>), and the alpha-numeric or wildcard represents a number, they might not be enclosed in double quotes.
1317536 The rpd might crash after a primary link failure when link protection is configured If a router has made link protection available for some LSPs and the primary link failure is caused by an FPC restart, this crash may occur.
1317542 Multicast traffic is not forwarded on the newly added P2MP branch/receiver Multicast traffic is not forwarded on the newly added P2MP branch/receiver because the multicast indirect NH and the alternate forwarding NH (snooping route) go out of sync after a receiver leaves the group.
1317623 The inactive route cannot be installed in multipath next-hop after disabling and enabling the next-hop interface in L3VPN scenario In some circumstances, a route from a BGP peer in a VRF may have an incorrect multipath attribute.
1318476 The rpd might crash when the link flaps on an adjacent router The rpd (Routing Protocol Process) might crash during heavy next hops churn.
1318528 The daemon bbe-smgd may crash after performing GRES In subscriber management scenario with Point-to-Point Protocol over Ethernet (PPPoE) configured, bbe-smgd may crash if performing graceful routing engine switchover (GRES) during PPPoE subscribers login. This is a timing issue and only part of the subscribers may get synced to the standby RE in this case.
1318677 FPC crash on configuration change for PFE sensors On receiving a configuration change for PFE sensors in the middle of a reap cycle there is a chance that the PFE might crash due to invalid data access. This is a timing issue and related to the length of time it takes to reap the sensors.
1319338 ISIS might choose a sub-optimal path after the metric change in ECMP links On a busy system when ISIS interface metric configuration is changed for ECMP links, ISIS might choose a sub-optimal path instead of the best path. The issue will clear itself if a full LSP (Link State PDU) re-generation (e.g. LSP refresh is triggered because of LSP aged or clear ISIS database) happens.
1320254 2-3 secs packets loss is seen every 5 mins on Junos Fusion On Junos Fusion Enterprise/Provider Edge platforms with feature dot1.x is configured, if the FPC has no interface as cascade port on Aggregation Devices (ADs), 2-3 secs packets loss might be seen every 5 minutes.
1320585 Move XQ_CMERROR_XR_CORRECTABLE_ECC_ERR to minor and re-classify remaining XQCHIP CMERROR from FATAL to MAJOR The default severity of the correctable ECC errors on MX Series routers with MPC2E NG Q, MPC3E NG Q, or MPC5E has been changed from Fatal to Major. This helps in avoiding instances of line card restart caused by Fatal errors, thereby preventing any potential operational impacts for users.
1320880 PPP inline keepalive does not work as expected when CPE aborts the subscriber session For DSL (Digital Subscriber Line) subscribers such as PPPoE (Point-to-Point Protocol over Ethernet), when a CPE (customer premises equipment) device is administratively powered off, the BRAS (Broadband Remote Access Server) terminates the subscriber as expected upon the expiry of the configured PPP LCP (Link Control Protocol) keepalive value. However, in a scaled scenario, a few subscriber sessions remain active even after the keepalive has expired, due to which the same CPE (client) cannot reconnect unless the former sessions are cleared/deleted from the server or the client waits an extended amount of time for the server to internally clear those sessions.
1321122 The traffic with more than 2 VLAN tags might be incorrectly rewritten and sent out On MX with MPC1E/MPC2E/MPC 3D 16x 10GE/MPC3E/MP4E, EX9200 switch or T4000 with type 5 card, if the interface is configured with input-vlan-map option, then the traffic with more than 2 VLAN tags might be incorrectly rewritten and sent out, then it will cause the traffic to be dropped.
1321952 The rpd might crash due to memory leak in RSVP scenario When make-before-break (MBB) such as re-optimization, auto-bandwidth and interoperate with older releases happens in RSVP scenario, the rpd might crash.
1323256 Commands "show chassis environment pem" and "show chassis power" do not show 'input voltage' correctly. On SRX5K devices, DC PEM is used on the box, the output of "show chassis environment pem" and "show chassis power" commands do not show DC input value correctly.
1325271 MPC cards might drop traffic under high temperature When some specific MPC cards (MPC3E/4E/5E/6E/2E-NG/3E-NG) work under high temperature (around 67C or higher), XM-DDR3 memory refresh interval will be reduced and hence DDR bandwidth and Packet Forwarding Engine (PFE) forwarding capacity will be reduced. As a result, traffic might get dropped.
1326584 On SRX5400, SRX5600, and SRX5800 devices, SPC2 XLP stops processing packets in the ingress direction after repeated RSI collections. SRX5400/5600/5800 platforms using SRX5K-SPC-4-15-320 (SPCII) may encounter an XLP buffer leak during Request Support Information (RSI) data collections, resulting in intermittent packet loss or complete loss of ingress packets.
1326899 The rpd process might crash continuously on both REs when "backup-spf-options remote-backup-calculation" is configured in ISIS protocol If the knob "backup-spf-options remote-backup-calculation" is being used for remote loop-free alternate (LFA) backup path in Intermediate system to Intermediate system (ISIS) protocol and some routes have both IP and label-switched path (LSP) backups, the rpd process might crash continuously on both master Routing Engine (RE) and backup RE.
1327723 The MAC might not be learnt on MX Trio-based card due to the negative value of the bridge MAC table limit counter The MAC might not be learnt on MX Trio-based card due to the negative value of the bridge MAC table limit counter.
1327724 The packet might get dropped in LSR if MPLS pseudowire payload does not have control word and its destination MAC starts with '4' When the label-switching router (LSR) works on MX Series with MPCs/MICs platforms or vMX and LSR carries MPLS pseudowire (such as l2circuit(LDP based)/l2vpn(BGP based)/VPLS) traffic, the packet might get dropped if the MPLS pseudowire payload does not have control word and its destination MAC starts with '4'.
1327904 Multiple next-hops may not be installed for IBGP multipath route after IGP route update Multiple next-hops may not be installed for an internal BGP(IBGP) route received from a multipath-enabled peer when an active IBGP route from a non-multipath-enabled peer is changed to a new active route from a multipath-enabled peer due to interior gateway protocol(IGP) route update.
1328570 Directories and files under /var/db/scripts lose execution permission or directory 'jet' is missing under /var/db/scripts causing "error: Invalid directory: No such file or directory" error during commit On MX10003, MX150, MX204, MX240/480/960 with RE-S-X6-64G, MX2010/MX2020 with REMX2K-X8-64G, PTX1000, PTX10008, PTX10016, QFX10000, QFX5200, SRX1500, SRX4100, SRX4200 platforms: execution is denied when running automation script stored in Junos automation folder(/var/db/scripts) or directory 'jet' is missing under /var/db/scripts causing "error: Invalid directory: No such file or directory" error during commit.
1329013 With BGP/LDP/ISIS configurations, deleted ISIS routes may still be visible in RIB With BGP/LDP/ISIS configurations, deleted ISIS routes may still be present in the RIB The PR does not affect or have any impact on route selection or other functionality of RPD. Just that deleted ISIS routes don't get removed with specific configurations.
1330150 Not all CSURQ messages are replied to Not all CSURQ messages are replied to when the number of sessions addressed in a CSURQ exceeds approximately 107.
1331185 The dcd process might crash due to memory leak and causing commit failure In some situations, like multiple commit in a short time with scaled configuration, dcd memory leak might occur. This could cause commit to fail.
1332153 Router hits db prompt at netisr_process_workstream_proto Due to an issue with a lock protected variable of netisr queue and if rate limiting also kicks in, the count of remaining packets in netisr queue becomes wrong. This leads to kernel crash or db prompt.
1333265 The subinfo process might crash and it might cause the PPPOE subscribers to get disconnected On MX-Series platforms with a Point-to-Point over Ethernet (PPPoE) subscriber environment, in order to increase overall system performance of subscriber accessing, after optimizing the Session Database (SDB) using Short Term Storage (STS) cache, the subinfo process might crash and might cause the SDB of MX subscriber to experience a down event. As a result, the PPPOE subscribers might get disconnected from the MX.
1333380 The log messages file is filled with message "node*.fpc*.pic* Status:1000 from if_np for ifl_copnfig op:2 for ifl :104" On all SRX Series devices running with Junos OS Release 17.4R1 or onwards, the log messages file is filled with message "node*.fpc*.pic* Status:1000 from if_np for ifl_copnfig op:2 for ifl :104" and "node*.fpc*.pic* IFL: Error:1000 while changing IFL 104 index to UP".
1335319 BGP sessions get stuck in active state after remote end (Cisco) restart the device In BGP (Border Gateway Protocol) environment, BGP sessions get stuck in active state after remote Cisco router restart or update the device.
1335486 Log "No Port is enabled for FPC# on node0" generated every 5 seconds Since 12.3X48-D55 on SRX5K, the unnecessary log "No Port is enabled for FPC# on node0" is observed in the chassisd log every 5 seconds. The log is removed in 12.3X48-D70 and 15.1X49-D140.
1335914 The rpd process memory leak is observed upon any changes in VPLS configuration like deleting/re-adding VPLS interfaces In Virtual private LAN service (VPLS) scenarios, any changes in VPLS configuration like deleting/re-adding VPLS instances or deleting/re-adding VPLS interfaces might cause the rpd process memory leak. The memory leak rate is 14 bytes per VPLS interface.
1336207 PTX device may get into an abnormal state due to the malfunction of the F-Label protection mechanism From 16.1 onwards, a PTX device may get into an abnormal state due to F-Label exhaustion. The protection mechanism for warning about and protecting against F-Label exhaustion malfunctions on these releases after network churn.
1336946 Configuring "lldp neighbour-port-info-display port-id" does not take effect When "lldp neighbour-port-info-display port-id" is configured, the interface's name should be shown under "Port's Info" in the "show lldp neighbor" output, but it does not take effect on certain software versions.
1340264 The MX10003 MPC off-line button is not effective Off-line button to bring an MPC off-line does not work.
1340612 PTX FPC link down after router reboot or flap In a rare case on PE-chip based PTX FPCs, DFE tuning can end up with a port staying down.
1341336 The rpd crash might occur when receiving BGP updates From Junos 16.1R1 onwards, there might be a mismatch in the length of BGP update message between BGP main thread and I/O thread when receiving BGP updates. If this issue happens, rpd crash might be seen.
1342481 The rpd may crash when BGP flaps When EBGP peer connections with labeled-unicast capability flap, if newly received label information is the same as that of an existing route, the Routing Protocol Daemon may restart unexpectedly.
1344732 PTX1008: 30-Port Coherent Line Card (DWDM-lC) does not come up Applicable to only 17.4R1-S2: PTX1008 30-Port Coherent Line Card (DWDM-LC) will not come up in the release 17.4R1-S2
1345275 SRX1500 devices may encounter a failure accessing SSD drive SRX1500 devices may encounter a loss in reading/writing access to SSD drive due to an incorrect calculation error during read/write operations with SSD firmware version 560ABBF0.
1345365 [EX9208] / [17.4R1.16] Dot1x re-authentication issue During the authentication process the VoIP phone MAC is added to both the data and voice VLANs. Later the VoIP phone sends tagged frames over the voice VLAN only. The MAC entry in the data VLAN then ages out, which triggers deletion of that MAC in the data VLAN. The dot1x process was not comparing the MAC state learnt on the data and voice VLANs, so when re-authentication is triggered it finds the MAC aged out and clears the dot1x session. This is a bug and will be fixed in the next release.
1345519 The rpd might crash if the IRB interface and routing instance are deleted together in the same commit On all MX Series platforms in an Ethernet VPN (EVPN) scenario, the rpd might crash if the Integrated Routing and Bridging (IRB) interface and the routing instance are deleted together in the same commit operation.
1345882 Summit 3RU: MAC addresses of multiple interfaces are found to be duplicates. Duplicate MAC addresses are seen on interfaces on different PICs.
1346054 Summit: Routing Engine model changed from JNP10003-RE1 to RE-S-1600x8 There is a change in the RE model for MX10003 and MX204; the change is shown in the output of "show chassis hardware" and "show chassis routing-engine".
1347250 When in hardware-assisted-pm-mode with a scaled PM configuration, deactivating eth-oam can lead to an FPC crash When eth-oam is deactivated with a scaled PM configuration (under hardware-assisted-pm-mode), the FPC can become unstable, which can lead to an FPC core.
1348089 EVPN-VXLAN: MX: Output policing action does not work on irb interfaces for VNIs Output policing action for EVPN VXLAN may not be applied to an interface despite configuration on the irb interface.
1348607 The rpd might crash when restarting routing or deactivating ISIS In an Intermediate System-to-Intermediate System (ISIS) segment routing environment with the mapping-server feature enabled, rpd might crash when routing is restarted or the ISIS configuration is deactivated. The rpd will recover by itself.
1348753 Chassisd memory leak on MX10003 and MX204 platforms that can cause an eventual RE switchover and crash The chassisd process running on MX10003 and MX204 platforms leaks memory. The leak occurs as long as chassisd is running and there is no way to stop it. This would cause an eventual RE switchover and chassisd crash.
1349228 The mspmand process might crash when executing the "show services nat deterministic-nat nat-port-block" command With Network Address Translation (NAT) configured on MS-MPC/MS-MIC, if a NAT rule is configured with multiple terms and the first term has the 'no-translation' type configured, executing the 'show services nat deterministic-nat nat-port-block' command might cause the mspmand process to crash.
1351203 pfed process consuming 80-90% CPU when running subscriber management on PPC-based routers The pfed process consumes high CPU on PPC-based routers running subscriber management. This includes MX5-MX80 and MX104.
1353111 "Chassis Manager Daemon - chassisd" memory leak Memory leak in chassisd

Saturday, 5 May 2018

Hybrid cloud: What it is, why it matters

The cloud enables companies to offload their back-end architecture into remote, virtual environments. Besides freeing up physical space that would otherwise be used to house server racks, the cloud allows organizations to hand off the responsibilities of setting up, hosting, and scaling back-end architecture to third parties like Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Heroku, Rackspace, Cloudstack, and others. If you’re a brand new startup, this means you can forego the time and expense of setting up a traditional data center. If you’re an established enterprise, it gives you an opportunity to streamline your fragmented and siloed data operations so that your onsite computing power can be devoted to mission-critical problems.

That said, for all its benefits, the cloud isn’t for everyone. The convenience of the public cloud comes with costs, including a reduction in data security and increased latency. For many organizations, particularly those in regulated industries, like finance, or those that require high-performance, low-latency connections for certain functions, the public cloud isn’t a viable option. Many of these organizations have switched to private clouds, which allow them to enjoy some of the benefits of cloud computing without compromising security or performance.

Increasingly, organizations like these are looking to a third option: the hybrid cloud. In this article, we’ll explore what hybrid cloud solutions entail, how they compare to the public cloud, and who you need to set one up.
CLOUD COMPUTING REFRESHER
If you’ve ever used Dropbox for storage, Salesforce for CRM, ADP for payroll, or Gmail for communication, then you’ve used a cloud service (also known as a web service). What ties these different services together is that they don’t require the installation of software (though some services might have standalone apps). Rather than storing data on your own servers and running applications using your own resources, you access cloud services over the internet. The service providers are responsible for building specialized data centers that support their particular service for their clients.

While the above examples are generally considered examples of software-as-a-service (SaaS), there are other cloud services that replace many of the functions of a traditional data center. Amazon Web Services (AWS) is considered an example of infrastructure-as-a-service (IaaS), offering virtual access to storage, computing, scaling, and backup solutions. Additionally, platform-as-a-service (PaaS) providers supply a dev environment, server, and database. What all these cloud services have in common is that they fulfill roles traditionally handled by an on-site data center.

What if your business needs prevent you from using the public cloud? In these cases, many companies opt to create their own private clouds, implemented and managed by their IT departments. These private clouds are fire-walled behind the company’s network, meaning that sensitive information isn’t stored on the public internet.
What Is the Hybrid Cloud?
At its most basic, a hybrid cloud joins together a public and private cloud with an encrypted connection and technology that makes data portable. The key here is that both clouds remain separate, independent entities while also having one or more touch points in common. A hybrid cloud is not the same as simply relying on cloud services for some functions and a private cloud for others.

For some organizations, a hybrid cloud represents an intermediary step between their old on-site data storage and processing setups and transitioning entirely to the public cloud. For others, hybrid cloud solutions enable them to leverage the scalability of cloud computing while maintaining the integrity of their data and ensuring compliance with regulatory mandates and industry standards.
Architecture of Microsoft hybrid cloud scenarios
The Microsoft hybrid cloud stack consists of several layers, which include On-premises, Network, Identity, Apps and scenarios, and the category of cloud service (Microsoft SaaS, Azure PaaS, and Azure IaaS).

The Apps and scenarios layer contains the specific hybrid cloud scenarios that are detailed in the additional articles of this model. The Identity, Network, and On-premises layers can be common to the categories of cloud service (SaaS, PaaS, or IaaS).

On-premises
On-premises infrastructure for hybrid scenarios can include servers for SharePoint, Exchange, Skype for Business, and line of business applications. It can also include data stores (databases, lists, files). Without ExpressRoute connections, access to the on-premises data stores must be allowed through a reverse proxy or by making the server or data accessible on your DMZ or extranet.
Network
There are two choices for connectivity to Microsoft cloud platforms and services: your existing Internet pipe and ExpressRoute. Use an ExpressRoute connection if predictable performance is important. You can use one ExpressRoute connection to connect directly to Microsoft SaaS services (Office 365 and Dynamics 365), Azure PaaS services, and Azure IaaS services.
Identity
For cloud identity infrastructure, there are two ways to go, depending on the Microsoft cloud platform. For SaaS and Azure PaaS, integrate your on-premises identity infrastructure with Azure AD or federate with your on-premises identity infrastructure or third-party identity providers. For VMs running in Azure, you can extend your on-premises identity infrastructure, such as Windows Server AD, to the virtual networks (VNets) where your VMs reside.
WHY GO HYBRID?
Now that we’ve covered what the hybrid cloud is, what are its advantages?

Flexibility. The main reason organizations adopt the hybrid cloud approach is that it gives them maximum flexibility to explore new products and business models. If your business needs are continually changing, your development team can benefit from having a private environment on which to build and test new software without having to dramatically rearrange your IT resources and architecture.
Security. Protected, confidential, and sensitive information can be stored on a private cloud while still leveraging resources of the public cloud to run apps that rely on that data. This is especially important for businesses that store sensitive data for their customers. (Think health care providers and payroll processors, for example.)
Stability. Even the biggest and most reliable cloud service providers have downtime. By keeping certain functions accessible and on-site, organizations insulate themselves from network failures. Another concern (currently hypothetical) involves the erosion of net neutrality, which could lead some ISPs to throttle speeds for certain traffic-intensive sites and services. For services that require an extremely high degree of availability (like social networks), ensuring stability is a major consideration.
Reduced latency. For certain high-speed functions, it’s impractical to run apps in the public cloud. Keeping some processing jobs on-site allows businesses to allocate their computing resources more effectively. Financial firms that handle high-volume trades and businesses that rely on real-time analytics are two examples of organizations that could benefit from keeping certain functions on a private cloud.
Cost effectiveness. As IT’s role has grown, so too have the demands placed on the data center. When data centers are forced to do too many things, efficiency suffers. You could invest money in upgrading your computing or storage, but why not offload the non-essential tasks onto a cloud-storage system? That way, you can dedicate your on-site resources to your most important tasks.
GETTING YOUR CLOUDS TO COMMUNICATE
A defining requirement of hybrid cloud environments is that applications and services operating across different systems must be able to exchange data. Each of these systems may have its own rules about how data can be stored and moved, based on business rules, regulatory mandates, and technical specifications. In order to achieve the efficiencies and savings of the hybrid cloud, these different workloads—which can include daily batch processes, real-time transactions, high-performance analysis, and more—need to behave as if they’re part of a single, unified system.

One of the best ways to integrate different cloud environments is via APIs. APIs allow a piece of software to connect with another piece of software without needing to access the underlying code. They do this via abstraction, presenting just the rules and interfaces needed to connect to the service. For hybrid cloud environments, abstraction provides another key advantage: It controls exactly what parts of your system are visible to outside developers. This way, you can protect the integrity of sensitive data on your private cloud while still allowing web services to access it as necessary.
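To make that idea concrete, here is a minimal, illustrative sketch (using Flask, with entirely hypothetical record and field names) of an API facade sitting in front of a private data store. Only a whitelisted projection of each record is ever returned, so a consumer running in the public cloud can read an order's status without ever seeing the sensitive columns.

# Minimal sketch (hypothetical field names) of an API facade in front of a
# private data store. Only whitelisted fields are ever serialized, so a
# public-cloud consumer never sees the sensitive columns.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for records living in the private cloud; the real store would be a
# database behind the firewall.
_PRIVATE_ORDERS = {
    "1001": {"status": "shipped", "eta": "2018-05-09", "card_number": "4111..."},
}

PUBLIC_FIELDS = {"status", "eta"}  # the only fields the API exposes

@app.route("/orders/<order_id>")
def order_status(order_id):
    record = _PRIVATE_ORDERS.get(order_id)
    if record is None:
        abort(404)
    # Abstraction in practice: return a projection, not the raw record.
    return jsonify({k: v for k, v in record.items() if k in PUBLIC_FIELDS})

if __name__ == "__main__":
    app.run(port=8080)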

All the major cloud service providers supply their own APIs to allow their customers to build workloads that take advantage of cloud storage and computing services. However, these APIs may require substantial programming in order to get them working with your system, and in the case where you’re using multiple cloud services, you should expect competing cloud APIs to be incompatible. Services like RightScale, Scalr, IBM WebSphere Cast Iron, and Morpheus provide a further layer of abstraction via templates and management tools that finesse these different APIs in order to integrate workloads.
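The sketch below suggests what that extra layer of abstraction looks like in practice. It is not the API of any of the tools named above; the class and function names are hypothetical, and the adapter bodies are stubs standing in for real provider SDK calls (for example, boto3 on AWS).

# Illustrative sketch of a provider-neutral abstraction layer: one put_object()
# call, with per-provider adapters hidden behind it. Adapter bodies are stubs.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral interface the rest of the workload codes against."""

    @abstractmethod
    def put_object(self, container: str, key: str, data: bytes) -> None: ...


class AwsS3Store(ObjectStore):
    def put_object(self, container, key, data):
        # A real adapter might call:
        #   boto3.client("s3").put_object(Bucket=container, Key=key, Body=data)
        print(f"[aws] stored {key} ({len(data)} bytes) in bucket {container}")


class PrivateCloudStore(ObjectStore):
    def put_object(self, container, key, data):
        # A real adapter would write to the on-premises object store's own API.
        print(f"[private] stored {key} ({len(data)} bytes) in {container}")


def archive_report(store: ObjectStore, report: bytes) -> None:
    # Workload code is identical no matter which cloud it lands on.
    store.put_object("reports", "q2-summary.csv", report)


archive_report(AwsS3Store(), b"region,revenue\nus-east,42\n")
archive_report(PrivateCloudStore(), b"region,revenue\nus-east,42\n")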
SOME EXAMPLES OF HYBRID CLOUD ENVIRONMENTS
No two hybrid cloud setups are the same. Here are a few ways different organizations can take advantage of the hybrid cloud.

An e-commerce site relies on Salesforce in the public cloud to manage its customer relationship management (CRM) functions while also using a private cloud to test and build new analytics products based on that data.
A parts manufacturer relies on a private cloud to collect and analyze billions of points of data coming in from IoT sensors but also needs to enable customers on the public cloud to see real-time order-status updates that depend on that sensor data.
A major health care provider needs the ability to compartmentalize patient data in compliance with HIPAA while also enabling patients the ability to access some of their information through the provider’s web app.
A video-streaming service does not have the computing power on-site to handle weekend binge-watching. During these high-traffic periods, the company can “burst” some of its processes onto a public cloud service to ensure availability even as traffic spikes (a minimal sketch of this bursting logic appears below).
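A minimal sketch of that bursting decision, assuming a made-up 80% utilization threshold and placeholder submit functions rather than any real scheduler API:

# Toy sketch of cloud bursting: keep jobs on the private cluster while there is
# headroom, spill them to the public cloud once local utilization crosses a
# threshold. The 80% figure and the submit_* functions are illustrative only.
BURST_THRESHOLD = 0.80  # fraction of private capacity in use before bursting


def submit_private(job_id: str) -> None:
    print(f"{job_id}: running on the private cloud")


def submit_public(job_id: str) -> None:
    print(f"{job_id}: burst to the public cloud")


def schedule(job_id: str, used_cores: int, total_cores: int) -> None:
    utilization = used_cores / total_cores
    if utilization < BURST_THRESHOLD:
        submit_private(job_id)
    else:
        submit_public(job_id)


# Saturday evening: the private cluster is nearly full, so new jobs burst out.
schedule("transcode-001", used_cores=60, total_cores=100)   # stays private
schedule("transcode-002", used_cores=85, total_cores=100)   # bursts to public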
THINGS TO CONSIDER WHEN MOVING TO A HYBRID CLOUD
Moving to a hybrid cloud can save money and make your organization more efficient and agile. That said, changing your IT infrastructure can be a complex and expensive undertaking. Before moving to a hybrid cloud, you should carefully weigh your options and make sure you have the personnel, resources, and time to make the switch.

Setup and customization. Who’s responsible for making sure that your web services are properly connected to your in-house operations? Integration can be a tricky and time-consuming process, so make sure you’ve allocated time for customization and testing.
Data transfer. Organizations should expect to incur a fee when moving their data onto the cloud, especially if there are large amounts of it.
Management. How will you manage your hybrid cloud environment? Especially when your workloads are abstracted from the hardware they run on, it’s critical to make sure that resources are efficiently assigned based on business needs and availability. Modeling out your workloads should give you some sense of how much CPU, disk, and memory resources are needed.
Storage and maintenance. Think carefully about how your data could grow. Are you a Big Data company that needs an extremely scalable storage solution? Or do you just need access to resources during peak times?
Compliance. If you’re in a regulated industry or handle sensitive data regularly, you’ll probably want to audit your cloud service to make sure it meets your specific needs.
Workloads. There are many different types of workloads, some of which are better suited to the cloud than others. Are you running batch workloads that can run in the background or overnight over the public cloud? Or do you need high-powered real-time analytics workloads that require all the computing power in your data center? The answer will help inform how you set up and manage your hybrid cloud.
Load balancing. In distributed computing environments, load balancing ensures that no single machine gets overwhelmed with requests. Typically, the load balancer sits in front of the servers and uses an algorithm to distribute workloads efficiently.
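As a rough illustration of the round-robin variant of that algorithm (server names are placeholders, not tied to any particular product):

# Minimal sketch of round-robin load balancing: the balancer sits in front of a
# pool of servers and hands each incoming request to the next server in turn.
from itertools import cycle


class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)  # endless rotation over the pool

    def route(self, request_id: str) -> str:
        server = next(self._servers)
        print(f"request {request_id} -> {server}")
        return server


balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
for i in range(5):
    balancer.route(f"req-{i}")
# Requests land on app-1, app-2, app-3, app-1, app-2 in order.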