
2011 routing


zydron
17-11-10, 13:06
Good evening,

Some assorted news about routing:

- If you recall, for the output of our datacentres we
initially planned to use the Cisco Nexus 7016. After
production testing we found that it is not stable enough
for us to do the routing: very good for switching, but not
quite mature in terms of routing. That hurt, because our
plans fell through. So we urgently ordered routers
elsewhere ... from Cisco, Juniper and Brocade. We had very
little time, and the first router we received was the
Cisco ASR 9010. We tested it and it works (very) well. We
decided not to wait, and put it into production late last
week. You can see it on our backbone weathermap as
Asr-g1-a9: http://weathermap.ovh.net/backbone
Yes, you have to concentrate a bit to make sense of it ...
The advantage of this router is that it is designed to
deliver a high-availability service. 320Gbps of capacity,
currently passing 150Gbps. Cool.

Without waiting, we ordered a second one, which we hope
will arrive in late December. This will let us put a
second backbone in place, in parallel with the current
one. Second backbone routers are already in place: in
Amsterdam, where ams-5-6k was set up, and in Frankfurt,
where we added fra-1-6k. London is a little behind, as
Global Switch has only one rack left ... We are looking
for a solution, but it too will end up with a new router.
These 3 routers will let us double AMSIX capacity to
80Gbps, and DECIX capacity to 80Gbps as well. We will look
at LINX shortly. And to connect all this back to Roubaix,
we are increasing the transmission capacity between
Roubaix and Amsterdam to 120Gbps, and to 140Gbps towards
Frankfurt. London will rise to 100Gbps. Then in February
we will add 40Gbps more. In short: redundancy and
high availability.
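As a quick sanity check on the figures above, here is a
small back-of-the-envelope tally (the numbers are the ones
announced in this post; the variable names are just for
illustration):

```python
# Planned transmission capacity out of Roubaix to each city, in Gbps,
# using the figures announced above.
transport_gbps = {
    "Amsterdam": 120,
    "Frankfurt": 140,
    "London": 100,
}

# Public peering capacity after the doubling, in Gbps.
peering_gbps = {
    "AMSIX": 80,
    "DECIX": 80,
}

total_transport = sum(transport_gbps.values())   # capacity after this phase
total_after_february = total_transport + 40      # plus the 40Gbps planned for February

print(total_transport, total_after_february, sum(peering_gbps.values()))
```

So roughly 360Gbps of transmission capacity out of Roubaix
after this phase, and about 400Gbps from February, on top
of 160Gbps of public peering at AMSIX and DECIX.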

All this to say that by late December / early January we
should be finished with this phase of the upgrades.

- Following the issues with the VSS at Ovh in 2009/2010,
we have decided to change policy and to specialise the
routers. We have therefore set up 8 new routers
(Rbx-s1/2/3/.../8), with 4 more planned. This will let us
better manage network quality, through an infrastructure
we have already proven at Roubaix 1: 2 routers working
together. High availability.

- We are working to upgrade the HG network to a
high-availability network, with 10G cabling down to the
servers. There are still about three months of work left.
We will then officially move the HG offer to 100%
availability. A real 100%, guaranteed thanks to very new
technology that we had the joy of debugging throughout
early 2010, and that we deployed on the shared hosting to
connect 6000 servers. That is what allowed us to announce
unlimited traffic on the shared hosting offers ... So we
now know this equipment by heart, and we will be able to
offer 100% availability ...
This is our 2011 target: 100% on the offers.

- In parallel, we have launched a large investment in the
privateCloud. We have built new rooms in Roubaix 2, where
by late December we will host 4000 servers for this
business, and 8000 servers by the end of February. We use
the same routing and switching technologies, which will
allow us to offer 100% availability on the PCC (Private
Cloud Computing) offers. The offer is in internal alpha
testing. For the beta, we preferred to delay the launch of
the PCC to mid-December: the time to receive and mount the
4000 servers, then connect each of these servers with a
minimum of 2 cables to switches that are themselves
double-connected, again in high availability, to
aggregation switches, ending on several routers working in
parallel in an active/active/active/active configuration.
In short, it made no sense to do a beta with 100 or 200
servers, because the offer seems extremely interesting to
us and we want you to discover it without us being stingy
with the "5 days of PCC free". So that requires "a few"
servers. 4000 machines with 4, 8, 16 or 48 cores, with 16,
32, 64 or 128GB of RAM, plus NAS-HA ... etc. That should
do the trick ... because if, in the interface, you click
"add new server" and delivery takes > 1 minute, it is
not PCC ...
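To make the active/active/active/active idea concrete,
here is a minimal sketch of how traffic can be shared
across several parallel routers. It is purely illustrative
(the router names and the hashing scheme are assumptions
for the example, not OVH's actual configuration): each
flow is hashed onto one of the routers that are currently
up, so a router failure only re-hashes the affected flows
instead of cutting anyone off.

```python
import hashlib

# Hypothetical names for four routers running in parallel.
ROUTERS = ["r1", "r2", "r3", "r4"]

def pick_router(flow_id: str, up_routers: list) -> str:
    """Hash a flow identifier onto one of the currently-up routers.

    This mimics ECMP-style load sharing: every active router carries
    traffic, and a failure only re-hashes the affected flows.
    """
    digest = hashlib.sha256(flow_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(up_routers)
    return up_routers[index]

# All four routers active: flows spread across all of them.
placement_all = {f"flow-{i}": pick_router(f"flow-{i}", ROUTERS)
                 for i in range(1000)}

# One router fails: the remaining three absorb the traffic,
# and no flow is left without a router.
survivors = [r for r in ROUTERS if r != "r3"]
placement_degraded = {f"flow-{i}": pick_router(f"flow-{i}", survivors)
                      for i in range(1000)}
```

With 1000 flows, every router ends up carrying a share of
the traffic; remove one and the other three simply absorb
its flows.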

5 paragraphs ... I'll stop there.

Regards
Octave
