
Ticket opened with Engineering [RNP-NOC #264617]

Request Summary

• Request: Configuration of two L2 circuits + connection of HEPGRID (UERJ) to a Brocade OpenFlow switch at the PoP-RJ for a demonstration of Multipath TCP over circuits provisioned dynamically by an SDN/OpenFlow application (PhEDEx)
• Requirements: 2 WAN circuits between RJ and Miami, with bandwidth between 1 Gbps and 10 Gbps each.
• Implementation deadline: week of Nov 10
• Event (and demonstration) dates: Nov 17 to Nov 20
a) We request the configuration of two Layer 2 WAN circuits (VLANs) for the participation of HEPGRID (UERJ) in the SC14 demo coordinated by Caltech.
Endpoints:
  • End A: Ports 8:2 and 8:3 of the BD 8810 at the PoP-RJ
  • End B: VLANs on the XMR-8K at the NAP SP
b) Interconnection of the circuits with VLANs provided by AMPATH to carry the traffic to the SC14 event site in New Orleans, USA.
c) Brocade Brasil will be lending a NetIron switch with 10G and 1G interfaces and OpenFlow 1.0 support for installation at the PoP-RJ.
d) The PoP-RJ indicated that it does not have two (02) ZR XFPs to connect the Brocade NetIron O.F. switch at 10GbE and 1GbE to the BlackDiamond 8810.
e) Diagram with connection options:

SC14-Brazil.vsd

VLAN 2770:
  • AMPATH equipment: 10.255.254.6/29
  • Core SPO equipment: 10.255.254.1/29
  • Core RJO equipment: 10.255.254.2/29
  • Brocade RJO equipment: 10.255.254.3/29
  • HEPGRID equipment: 10.255.254.4/29
VLAN 2771:
  • AMPATH equipment: 10.255.255.6/29
  • Core SPO equipment: 10.255.255.1/29
  • Core RJO equipment: 10.255.255.2/29
  • Brocade RJO equipment: 10.255.255.3/29
  • HEPGRID equipment: 10.255.255.4/29
VLAN 2772:
  • AMPATH equipment: 10.255.250.6/29
  • Core MIA equipment: 10.255.250.2/29
  • Core RJO equipment: 10.255.250.1/29
  • Brocade RJO equipment: 10.255.250.3/29
  • HEPGRID equipment: 10.255.250.4/29
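
For reference, the addressing plan above can be sanity-checked with a short script. The sketch below is illustrative only and is not part of the request; it assumes Python 3 and simply restates the VLAN and /29 subnet assignments listed above:

    # Illustrative sketch: encode the VLAN addressing plan above and check that
    # every host address belongs to its VLAN's /29 subnet.
    import ipaddress

    PLAN = {
        2770: ("10.255.254.0/29", {"AMPATH": "10.255.254.6",
                                   "Core SPO": "10.255.254.1",
                                   "Core RJO": "10.255.254.2",
                                   "Brocade RJO": "10.255.254.3",
                                   "HEPGRID": "10.255.254.4"}),
        2771: ("10.255.255.0/29", {"AMPATH": "10.255.255.6",
                                   "Core SPO": "10.255.255.1",
                                   "Core RJO": "10.255.255.2",
                                   "Brocade RJO": "10.255.255.3",
                                   "HEPGRID": "10.255.255.4"}),
        2772: ("10.255.250.0/29", {"AMPATH": "10.255.250.6",
                                   "Core MIA": "10.255.250.2",
                                   "Core RJO": "10.255.250.1",
                                   "Brocade RJO": "10.255.250.3",
                                   "HEPGRID": "10.255.250.4"}),
    }

    for vlan, (subnet, hosts) in PLAN.items():
        net = ipaddress.ip_network(subnet)
        for name, addr in hosts.items():
            assert ipaddress.ip_address(addr) in net, f"{name} is outside {net}"
        print(f"VLAN {vlan}: {net} OK, {len(hosts)} of {net.num_addresses - 2} usable addresses assigned")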

 

 

f) Contacts
• Alex Moura / Marcos Schwarz (RNP/DPD/GRE) - Coordination
• Jeronimo Bezerra / James Grace (AMPATH/FIU)
• Marcos Buzo (AMPATH/RNP)
• Fábio Rosa (PoP-RJ)
• Eduardo Revoredo (HEPGRID / UERJ)
• Thiago Nascimento da Silva (RNP/DAERO) - VLAN configuration - ticket [ENG #264617]
__________________________________

Description

During SC14, Caltech is organizing a demonstration of the PhEDEx application, which will request, via SDN/OpenFlow, the dynamic creation of L2 circuits between the data transfer sites and endpoints on the SC14 show floor in New Orleans, LA, USA.
To enable HEPGRID (UERJ) participation in the demonstration, a switch with OpenFlow 1.0 support must be installed at the PoP-RJ.
The loan of 01 NetIron switch with 10GbE and 1GbE ports was arranged with Brocade Brasil to implement one of the scenarios in the diagram.
In the demonstration, the Brocade O.F. switch will be controlled by an SC14 SDN controller, which will automatically configure L2 circuits for data transfers between multiple endpoints over multiple paths, driven by the PhEDEx application.
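
For context, the sketch below illustrates the kind of rule such a controller pushes to the switch. It is not the SC14 controller (the demo uses Caltech's PhEDEx setup with its own SDN controller); the Ryu framework is used here only as a stand-in, and the application name and output port number are hypothetical. It installs an OpenFlow 1.0 flow that forwards frames tagged with one of the demo VLANs out a fixed port:

    # Rough illustration only (not the actual SC14 controller): a Ryu OpenFlow 1.0
    # application that, as soon as the switch connects, installs a flow forwarding
    # frames tagged with demo VLAN 2770 out a fixed port. Port 2 is hypothetical.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_0

    DEMO_VLAN = 2770   # one of the VLANs in the circuit plan above
    OUT_PORT = 2       # hypothetical port towards the WAN circuit

    class VlanCircuitApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def install_circuit_flow(self, ev):
            dp = ev.msg.datapath
            ofp = dp.ofproto
            parser = dp.ofproto_parser
            # Match frames tagged with the demo VLAN and send them to OUT_PORT.
            match = parser.OFPMatch(dl_vlan=DEMO_VLAN)
            actions = [parser.OFPActionOutput(OUT_PORT)]
            mod = parser.OFPFlowMod(datapath=dp, match=match, cookie=0,
                                    command=ofp.OFPFC_ADD, idle_timeout=0,
                                    hard_timeout=0,
                                    priority=ofp.OFP_DEFAULT_PRIORITY,
                                    flags=0, actions=actions)
            dp.send_msg(mod)

In the real demonstration the rules are not static like this: path selection and circuit setup are driven dynamically by the PhEDEx application through the controller.
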
To enable deployment of the O.F. switch on the HEPGRID access via Redecomep-RJ, the PoP-RJ confirmed that:
- The HEPGRID access fiber via Redecomep is single-mode (SM)
- Two 10GbE ports (8:2 and 8:3) are available on the BD 8810 switch.
Some components are still missing to install the options described in the diagram.
Does Engineering have any of the items below to help complete one of the deployment options 1, 2 or 3 described below?
Option 1 (see diagram):
   - Connect the HEPGRID access fiber from Redecomep to the NetIron OpenFlow switch, and then connect that switch at 10GbE to the Extreme BD 8810.
     In this case, 01 (one) 10GbE XFP SR is missing; alternatively, an attenuator could be installed so the current 10GbE XFP ZR port can be used for the connection between the NetIron O.F. and the BD8810.
HEPGRID(UERJ) <—10GbE—>  NetIron(O.F.)  <—10GbE—>  BD8810 (PoP-RJ) <—10GbE—> Juniper MX-480
Question 1: Does Engineering have XFPs it could lend for the PoP-RJ BD8810 to implement option 1 above?
__________________________________
Option 2 (see diagram):
   - Build a 10GbE “bridge” through the Brocade O.F. between the BD8810 and the MX-480.
     In this case, one 10GbE XFP would be missing to connect the BD8810 to the NetIron at 10GbE, plus 01 10GbE port with SFP+ on the Juniper MX-480.
     HEPGRID(UERJ) <—10GbE—> BD8810 (PoP-RJ) <—10GbE—> NetIron(O.F.) <—10GbE—> Juniper MX-480
Question 2: Does Engineering have an XFP it could lend for the PoP-RJ BD8810, plus 01 10GbE port with SFP+, to implement option 2 above?
__________________________________
Option 3 (see diagram):
   - Build 02 physical links (“downlink” and “uplink”) between the BD 8810 and the Brocade O.F., keeping the other physical connections unchanged.
     In this case, two XFP SR are missing to connect two 10GbE ports on the BD8810 to the NetIron O.F.
     HEPGRID(UERJ) <—10GbE—> BD8810 (PoP-RJ) <—10GbE—> Juniper MX-480
                                 |    |    |
                     1GbE (UTP)  |    |    |  (2x) 10GbE
                                 |    |    |
                              NetIron(O.F.)
Question 3: Does Engineering have 2 10GbE XFPs it could lend for the PoP-RJ BD8810 to implement option 3 above?
__________________________________
Note: The diagram of participating sites does not yet include HEPGRID in Rio de Janeiro; its inclusion will be requested once the infrastructure at the PoP-RJ is confirmed.

PerfSONAR Servers

Documentation

Labels
  • None
  1. Oct 31, 2014

    From: "Azher Mughal" <azher@hep.caltech.edu>
    To: "Grant CTR Miller" <miller@nitrd.gov>, "JBDT" <JBDT@nitrd.gov>, "sc14" <sc14@hep.caltech.edu>
    Sent: Friday, September 5, 2014 6:46:54 PM
    Subject: Re: JET Big Data Demonstrations Telecon summary

    I was talking with SC14 fiber team today and they told that CenturyLink (service provider) will be providing several dedicated paths this year:

    2 x 100G to Los Angeles (600W, Caltech)
    2 x 100G to MANLAN (32AOA, manlan switch, for possibility of 100G to Europe/CERN)
    2 x 100G to StarLight (710 N, ESnet->Omipop->UMich and second to Joe's Ciena switch)
    2 x 100G to Seattle (Westin, One will be connected directly to Canarie for UVic)

    In addition:
    Internet2 = 2 x 100GE (East and west , both shared)
    ESnet = ~150G (not sure)

    Cheers
    -Azher

  2. Nov 13, 2014

    We need the 3 VLANs that will take part in the SC14 experiment to be configured on the HEPGRID border switch:

    VLAN 2770 (configure IP 10.255.254.4/29 to test connectivity)

    VLAN 2771 (configure IP 10.255.255.4/29 to test connectivity)

    VLAN 2772 (configure IP 10.255.250.4/29 to test connectivity)
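
    To verify connectivity once the addresses are up, a simple ping sweep can be used. The sketch below is illustrative only; it assumes a Linux host and takes candidate peer addresses from the addressing plan above (the actual targets depend on what is configured at each end):

        # Illustrative sketch: ping candidate far-end addresses on each demo VLAN.
        import subprocess

        TARGETS = {
            2770: ["10.255.254.1", "10.255.254.2", "10.255.254.6"],
            2771: ["10.255.255.1", "10.255.255.2", "10.255.255.6"],
            2772: ["10.255.250.1", "10.255.250.2", "10.255.250.6"],
        }

        for vlan, addrs in TARGETS.items():
            for addr in addrs:
                # -c 3: three probes; -W 2: two-second timeout per reply.
                result = subprocess.run(["ping", "-c", "3", "-W", "2", addr],
                                        stdout=subprocess.DEVNULL)
                status = "OK" if result.returncode == 0 else "FAIL"
                print(f"VLAN {vlan} -> {addr}: {status}")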

  3. Nov 16, 2014

    On Sat, Nov 15, 2014 at 6:44 PM, James Grace <jgrac002@fiu.edu> wrote:
    Team,
    All VLANS have been dropped off to machines on the show floor.

    The host is located at: sc14.3115.sc14.org

    Here is the IP schema:

            Management:
            inet 140.221.162.24 netmask 255.255.255.0 broadcast 140.221.162.255

            SPRACE (eth4)
            inet 10.23.70.21  netmask 255.255.255.0  broadcast 10.23.70.255
            inet 10.23.71.21  netmask 255.255.255.0  broadcast 10.23.71.255
            inet 10.23.72.21  netmask 255.255.255.0  broadcast 10.23.72.255

            HEPGRID (eth5)
            inet 10.255.254.2  netmask 255.255.255.248  broadcast 10.255.254.7
            inet 10.255.255.2  netmask 255.255.255.248  broadcast 10.255.255.7
            inet 10.255.250.2  netmask 255.255.255.248  broadcast 10.255.250.7

    I’ve separated the VLAN sets (3xSPRACE and 3xHEPGRID) between two 40Gbs ports. So SPRACE and HEPGRID both have access to 40GBs NICs.  All VLANs are trunked through both ports on the Extreme BGX switch.

    -james

  4. Nov 21, 2014

    From: "Harvey Newman" <newman@hep.caltech.edu>
    To: "sc14" <sc14@hep.caltech.edu>
    Sent: Thursday, November 20, 2014 8:39:13 PM
    Subject: Thanks and Congratulations - a few points remembered


    HI Everyone !

    While the SC14 installation is just starting to be dismantled,
    I wanted to send you all a Big Congratulations !

    It was a magnificent effort this year with some great results.

    Thanks to you all for the unrelenting commitment and sustained work
    to bring this to fruition, with many long days before the conference
    and round the clock work during the conference to make this a success.

    So what happened ? A few points:

    • We sustained 1.4 Tbps during our memory to memory warmups on the first day. The peak was 1.55 Tbps in and out of the Caltech booth.
      • During the warm up we also had 1 Echostreams server doing a total of > 150G in and out of its two Mellanox NICs simultaneously.
    •  Disk to memory we reached a peak of 1.01 Tbps, reaching our goal. We have been able to sustain 940G+ for long periods
    • We sent and received 11 Petabytes of data to and from the Caltech booth. 3.7 Petabytes went over the wide area between New Orleans and Caltech, Victoria, Michigan, Sao Paulo and NERSC.
    • We had sustained flows of very close to 100G in one direction from UVic; and large flows often making up a big fraction of 100G to and/or from Caltech, UMich, and Sao Paulo. There were also new disk to disk and memory to memory records between the Northern and Southern hemispheres.
    • In terms of wide area traffic rates, we reached 370G sustained, and 310G for a long time, with a solid 100G (99.74G to UVic), up to 100G from Umich, and up to 93G from Caltech, 92.5G from the combination of CERN and Sao Paulo (via FLR), and 45G from NERSC.
    • Disk to disk was harder, because of problems which might have been due to the OS and/or what happens when you hit the CPUs as hard as we did. Nevertheless, we reached a peak of 340 Gbps and 300 Gbps sustained disk to disk. We will continue to investigate further and resolve the issues with the OS and/or with the controller, CPUs or other potential issues.
    • A lot was learned in terms of putting PhEDEx into larger scale use with circuits, and achieving more deterministic workflow in the large scale. Work on some of the built-in time delays, and perhaps on the central (single instance) database should continue,  as part of getting the new PhEDEx paradigm out in the field.
    • There were also some successes with OpenDaylight, limited in part by the very short time available and the early state of the ODL Hydrogen release. We will move forward with the ODL controller's reactive mode using our own testbed facilities and our Brocade switches at Caltech and elsewhere.

  5. Nov 25, 2014

    Graphs of the traffic - Panorama (www.rnp.br/servicos/conectividade/trafego) and CACTI (PoP-RJ and CEO) - with usage statistics:

    • Panorama ~ RNP Backbone between RJ and SP - period from 2014-11-18 to 2014-11-25:

    • CACTI PoP-RJ ~ UERJ HEPGRID - period from 2014-11-18 to 2014-11-25:

    • CACTI PoP-RJ ~ UERJ HEPGRID - period from 2014-11-18 to 2014-11-21:

    • CACTI PoP-RJ ~ UERJ HEPGRID - period from 2014-11-18 to 2014-11-20:

    • Panorama ~ RNP Backbone between SP and MIA - period from 2014-11-18 to 2014-11-25: