
What's new for September 2011


oles@ovh.net
07-09-11, 11:02
Hello,

So what's new at OVH? We've been hard at work. It will take a few days to complete certain phases of development on many (new) services, and we will take the opportunity to make several announcements and give you some updates.

Otherwise:
- OVH Mag No. 2 is being edited in-house. We stopped at 200 pages for No. 1. So we should have it ready for you at the end of the month.

- Filling the last datacentre, RBX4, has gone faster than expected, so we're looking at how to handle this growth without running out of space toward the end of the year. We are validating several options, all of which rely on innovative technological breakthroughs. The goal is to take RBX4 from 35,000 servers to 45,000, then 60,000 and 75,000 servers. We'll see what comes out of the internal alphas. Worst case, we'll build RBX5.

- To set up the backbone in the United States and Asia, we ordered a lot of new routers. This equipment took (forever!) 3-4 weeks to arrive, and yesterday we received the last of the orders. We can now finalise the configuration, and next week we'll start the deployments. This will allow us to establish new peering points in the United States and Asia and two new transit providers (Level3 and NTT), thus increasing the capacity of our network to over 250Gbps. As planned, by the end of 2012 we will have >1Tbps of interconnection to the Internet.

- Tomorrow night, we will finalise the NRA backbone in Lille. There will be a service interruption of a few minutes, needed to unplug the old hardware and switch the new hardware on.

- Currently in Lille, each DSLAM is connected at 2x1Gbps to a switch located in the NRA itself. That switch is connected at 10Gbps to a switch in another NRA, and so on, forming loops. This type of backbone is fine for ADSL, but with the advent of VDSL2, and therefore 100Mbps for each subscriber, we looked for something better. And we found better and cheaper: the DSLAMs are (still) connected at Xx1Gbps (X = 2, 4, 8), but now directly to two backbone routers, doing away with the switch in the NRA.
Each 1Gbps link is carried on the network in the form of loops, lit with 4x10Gbps DWDM wavelengths. Knowing that there are 5-10 NRAs per loop, that each loop can carry up to 800Gbps, and that a large NRA peaks at 10Gbps (the sum of all the ISPs together), we estimate that we are building a backbone able to withstand 5-7 years of innovation. As each NRA is connected to the first router via one route and to the second router via another route, the design survives fibre cuts, router failures... with all our hosting experience applied to the ISP business. The result is a guaranteed 100Mbps per subscriber, at a lower cost than our competitors, who today guarantee 150kbps per user.
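
To get an idea of the headroom this design leaves, here is a quick back-of-the-envelope check in Python. The figures are the ones quoted above; the script itself is just an illustration:

    # Figures quoted above: loops of up to 800Gbps, 5-10 NRAs per loop,
    # and a large NRA peaking at 10Gbps (all ISPs combined).
    LOOP_CAPACITY_GBPS = 800
    NRAS_PER_LOOP = 10          # worst case
    MAX_NRA_GBPS = 10

    peak_demand = NRAS_PER_LOOP * MAX_NRA_GBPS     # 100 Gbps
    headroom = LOOP_CAPACITY_GBPS / peak_demand    # 8.0

    print(f"worst-case demand per loop: {peak_demand} Gbps")
    print(f"headroom before the loop saturates: {headroom:.0f}x")

Even with 10 NRAs on a loop, each at its 10Gbps peak, the loop still has 8x headroom, which is what should let this backbone absorb several years of growth in subscriber bandwidth.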

This means we will be able to offer you 100Mbps VDSL2 for less than €25, with bandwidth guaranteed 24/7. It's what we have already been doing in hosting for the last few years. The backbone can also deliver value-added services such as private networks (VPN/MPLS), and of course it supports video/VOD/TV in unicast. Apple TV, Google TV or Netflix do not scare us. On the contrary, we're waiting for them, to deliver them to you unlimited.

- We will start the major work on the backbone in Paris. The goal is to shut down the POP at TH1 and move all the equipment from TH1 to TH2. Within a few days we will therefore migrate the Infinera that carries one of the two Paris/Roubaix and Paris/Frankfurt links. We are taking the opportunity to introduce two new Cisco ASR 9010s: one in TH2 and the other at GSW. They will centralise all the current (hosting) and future (NRA) traffic for Paris, but also for Lyon, Bordeaux and Marseille. In all, we're setting up 1.2Tbps of network capacity towards Paris, and we'll then roll this capacity out towards Lyon, Bordeaux and Marseille (on Paris/Lille we already have over 250Gbps).

- The site at Strasbourg is being started, and we will cut our Paris/Frankfurt connection to add Infinera equipment in Strasbourg. This will give us 400Gbps towards Paris and 400Gbps towards Frankfurt, and then allow us to connect Zurich and then Milan directly to Strasbourg (instead of Frankfurt). Thus, the servers we will propose for the Cloud in Strasbourg will be very close to Paris, Frankfurt, Prague, Warsaw, Zurich, Milan and Vienna. We will divide the latencies to these cities, and thus to these countries, by 2 or 3. We will then offer new DRP (disaster recovery plan) services based on two sites separated by 300km.
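
As a rough illustration of these latency gains: light travels at about 200,000 km/s in fibre, i.e. roughly 1ms of round trip per 100km. The route lengths below are assumptions for the sake of the example, not measured values:

    # ~200,000 km/s in fibre => ~0.01 ms of round-trip time per km.
    RTT_MS_PER_KM = 0.01

    def rtt_ms(route_km):
        return route_km * RTT_MS_PER_KM

    # Assumed route lengths, for illustration only:
    print(f"Strasbourg-Zurich direct:  {rtt_ms(250):.1f} ms")   # ~2.5 ms
    print(f"same path via Frankfurt:   {rtt_ms(650):.1f} ms")   # ~6.5 ms
    print(f"300km between 2 DRP sites: {rtt_ms(300):.1f} ms")   # ~3.0 ms

A direct route 2-3x shorter gives the division of latencies by 2 or 3 mentioned above, and 300km between two DRP sites costs only about 3ms of round trip.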

- A few weeks ago, we celebrated 100,000 servers and removed the installation fees on all services at OVH. Since then we have seen a very sharp increase in abuse of all kinds (spam, attacks, scans). That's why we changed our internal processes: for the past month, a dedicated server is terminated as of the second attack within less than 30 days. We saw the number of attacks go from 3-4 per week (with installation fees) to 5-8 per day (with free installation); the situation has since stabilised, over the last 2 weeks, at 5-6 serious attacks per week. Basically, we react quickly and ruthlessly to clean our network of any customers who turn out to be hackers, who have strange or borderline activities, or who cannot maintain adequate security on their infrastructure. With this less friendly approach, we have been able to keep (for now) installation fees at €0.

- In recent months, we have been working on the deployment of our services in North America. As it's a large area, we know that we will need several datacentres, and we are studying several sites for launching the first one. But in contrast to our competitors, the goal is not to go fast regardless of the cost, but to go cheap, with large network capacity and services managed from start to finish by OVH. This approach allows us to be profitable from day one, even if it means building one of the largest backbones in Europe and building more datacentres like RBX4. That said, this is not necessarily what we had in mind back when we first ran apt-get install apache, or, earlier still, rpm -Uvh apache-1.3.4-1.i386.rpm. In short, we want to replicate our business model in its entirety, because we think there will always be room for a serious player that has mastered its job, that is 2x to 5x cheaper than the market, and that does not charge for bandwidth. All this thanks to the fact that we reinvest all profits into the future.

So where to deploy a datacentre? We think we should stay in the "North" of "North America", i.e. not far from the US/Canada border. Why? Because energy is cheaper there: it comes from dams on the major rivers of Canada and the United States, and it's green and inexpensive. The cost of cooling the servers is also lower, since cooling depends on the outside temperature, and in the north it's cool, so it costs less. Looking at the fibre-optic networks, distances and latencies, we will have to create about 4-5 zones 15-20ms from each other (at roughly 1ms of round trip per 100km of fibre, that means zones some 1,500-2,000km apart). So yes, there will be several datacentres. And if you look at natural disasters, it's a good thing to be around the border: no earthquakes, no tornadoes, no volcanoes. It also costs less to build and run a datacentre there than in a risk area. One can then imagine why we would settle somewhere between Albany (New York), Montreal (Canada), Detroit (Michigan), Chicago (Illinois), Seattle (Washington) and/or Portland (Oregon). The notable exception is Texas, which has cheap locally produced energy from oil, but where it is hot and there are tornadoes, etc. We are continuing our research, but it's starting to take shape. If you have any ideas, please let us know. We should be able to ping our first servers over there before the end of the year.

- 2 years ago, we knew that we needed to go to the United States. We launched internal discussions on how to organise ourselves so as to maintain a high speed of innovation while growing, and while talking with customers whose motivations are increasingly varied and sometimes conflicting.
This led us to create an "Interteam" which puts pressure on the internal teams across the board. Through two-week sprints, this team pushes through the sysadmin and developer factory the needs of our customers, as discussed with them on the mailing lists, the forums, Twitter and through marketing. The goal is to enrich our offerings every two weeks, with one or more new features, new services and bug fixes, and so to offer services that meet the real needs of our customers. Alphas, betas and production: it's been our DNA since the start, and it's what sets the pace at OVH. Nothing has changed, except that in 1999-2001 we felt what needed to be done, whereas in 2009-2011 we were finally able to put words on these methods. The Interteam is a snapshot of this very pragmatic organisation, tested and validated over the last 9-12 months. It's been working quite well, even if we (often) run a little late in pushing things to production. Still, the services are moving forward and you can feel OVH moving. That is reassuring enough for me to devote myself to developing OVH on another continent and to enriching OVH (in Europe) from another point of view. For those who have doubts, here are expressions I often use: "the graveyards are full of indispensable people" and "it will not be worse or better, but different." And even then, it will hardly be different. So, to work!

All the best

Octave