After considering the queries, clarifications, recommendations and suggestions, the BAC4IGOV hereby decides to include, revise, amend, delete and/or adapt the following provisions:

ITEM QUERY BAC4IGOV’S RESPONSE
1 For the Admin Node, Ceph Monitor Node and OpenStack Controller Node in the Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System, please allow us to use 460W maximum redundant power instead of 400W maximum redundant power. Please refer to Items 1.1.2 (page 24), 2.1.2 (page 29) and 3.1.2 (page 33).

Reason: We will reduce the power consumption of other components in the same rack to meet your power distributor requirement.

OK.

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System”,

● Change the power specification of the “Barebone” entry at item 1.1.2.

○ From: 400W maximum Redundant

○ To: 400W (or 460W, provided the rack power distributor requirement is satisfied) maximum Redundant

● Change the power specification of the “Barebone” entry at item 2.1.2.

○ From: 400W maximum Redundant

○ To: 400W (or 460W, provided the rack power distributor requirement is satisfied) maximum Redundant

● Change the power specification of the “Barebone” entry at item 3.1.2.

○ From: redundant 400W maximum

○ To: redundant 400W (or 460W, provided the rack power distributor requirement is satisfied) maximum
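
The condition attached to the revised power specification above is a rack power-budget constraint: the 460W power supplies are acceptable only if the total draw of all equipment in the rack still fits within the rack power distributor's capacity. The sketch below illustrates that check in Python; every figure in it (the PDU capacity, the equipment counts and the per-device draw) is an assumed example for illustration only and is not taken from the TOR.

    # Illustrative rack power-budget check for the 400W -> 460W PSU substitution.
    # All numbers below (PDU capacity, equipment counts, per-device draw) are
    # assumed examples, not values from the TOR.
    PDU_CAPACITY_W = 8000  # assumed capacity of the rack power distributor

    equipment = {
        "admin_node":        {"count": 1, "max_draw_w": 460},
        "ceph_monitor_node": {"count": 1, "max_draw_w": 460},
        "controller_node":   {"count": 2, "max_draw_w": 460},
        "other_equipment":   {"count": 1, "max_draw_w": 2000},  # switches, storage, etc. (assumed)
    }

    total_draw_w = sum(item["count"] * item["max_draw_w"] for item in equipment.values())

    # The substitution is acceptable only if the rack total stays within the PDU budget.
    within_budget = total_draw_w <= PDU_CAPACITY_W
    print(f"Total rack draw: {total_draw_w} W of {PDU_CAPACITY_W} W "
          f"({'within' if within_budget else 'exceeds'} budget)")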

2 For the OpenStack Compute Nodes in the Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System, please allow us to use 2.5” hard disks instead of 3.5” hard disks with the same capacity, speed and RPM. Please refer to Item 3.3.5 (page 38).

Reason: We will use a 2U rack server that meets the requirement; the 2.5” hard disks will be installed in the same size rack server, and a 2.5” hard disk has the same capacity, speed and RPM as a 3.5” hard disk.

Is this for item 4 “OpenStack Compute Nodes” or for item 3 “OpenStack Controller Nodes”?

There is no item 3.3.5 under the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System”.

In any case, we can change the hard disk drive size:

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System”,

● Change the disk size specification of the “Hard Drives” entry at item 4.1.5.

○ From: 3.5”

○ To: 3.5” or 2.5”

● Change the disk size specification of the “Hard Drives” entry at item 4.2.7.1.

○ From: 3.5”

○ To: 3.5” or 2.5”

3 For the Computer Servers in the Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System, please allow us to use a 2-node server in a 2U rack with a total of 16 SATA or SAS hard disk slots, instead of a 2-node server in a 2U rack with a total of 24 SATA or SAS hard disk slots. Please refer to Item 2.1.2 (page 98).

Reason: The actual hard disk requirement for the Computer Server is four SATA SSDs; 16 hard disk slots are already enough for future storage capacity expansion.

OK

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System”,

● Change the number of Hard Disk Slot of the “Barebone” entry at item 2.1.2.

○ From: 24 x SATA/SAS

○ To: at least 16 x SATA/SAS

4 For the SAN Storage Server in the Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System, please allow us to use a 4U form factor instead of a 3U form factor. Please refer to Item 3.6.1 (page 103).

Reason: In this lot’s rack, whether the device is a 3U or a 4U form factor, the rack capacity is enough to install this SAN Storage Server. And with a 4U form factor device, we already provide additional hard disk slots for expansion.

OK

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System”,

● Change the specification of the “Form factor” entry at item 3.6.1.

○ From: 3RU

○ To: 4RU maximum

5 For the 4U Ceph OSD Node in the Supply, Delivery, Installation and Configuration of One (1) Lot of Object-based Storage System, please allow us to use 2.5” 400GB SATA 6Gbps SSDs instead of 2.5” 400GB SAS 6Gbps SSDs. Please refer to item 2.1.6.2 (page 63).

Reason: A 2.5” 400GB SATA 6Gbps SSD has the same speed as a 2.5” 400GB SAS 6Gbps SSD. If ICTO wants high performance, every vendor should provide 2.5” 400GB SAS 12Gbps SSDs.

OK

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of Object-based Storage System”,

● Change the SSD specification under the “Storage Drive” entry at items 2.1.6.2. and 2.2.9.1.

○ From: Six (6) 2.5″ 400GB SAS 6Gbps SSD

○ To: Two (2) 800GB PCI-E Non-Volatile Memory Express (NVMe) Storage Cards (optimized for write-intensive applications) or equivalent

6 For the Supply, Delivery, Installation and Configuration of One (1) Lot of Core Switch Modules, please allow us to provide a new core switch device instead of only providing switch modules. Please refer to Item 1 (page 116).

Reason: The core switch is the most concentrated data exchange point and needs to be stable for 10 years or more. The existing core switch has limited switching capacity and cannot be upgraded; it will become the bottleneck of the system in the near future. So we would like to provide a higher core switch model to increase the total switching capacity.

OK.

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of Core Switch Modules”,

Add item 1.8:

1.8. Alternative Option

1.8.1. If there is a need to provide new core switch/es to meet the core switch module specifications, the following are the minimum requirements:
1.8.2. Two (2) units of Ethernet chassis switch with a minimum of 4U form factor. For each chassis:
1.8.2.1. 6-slot chassis including fan tray
1.8.2.2. 600W/900W 100-240V PSU
1.8.2.3. Switching Capacity: 1952 Gbps
1.8.2.4. Layer 2 and Layer 3 HW forwarding rate: 1420 Mpps
1.8.2.5. Port capacity:
1.8.2.5.1. 40 ports 10GBASE-X (32 ports if 2 management switch modules)
1.8.2.5.2. 384 ports 10/100/1000BASE-T
1.8.2.5.3. 248 ports 1000BASE-X SFP (208 ports if management switch modules)
1.8.2.5.4. 120 ports 10GBASE-X SFP+ (96 ports if management switch modules)
1.8.2.6. Power: 45W
1.8.3. Two (2) units of management switch modules with I/O port, each having:
1.8.3.1. CPU: Dual core, 700MHz
1.8.3.2. DRAM: 1GB ECC SDRAM
1.8.3.3. Flash: 512MB CompactFlash
1.8.3.4. Slot Capacity: Up to 160Gbps
1.8.3.5. Features
1.8.3.5.1. Hitless failover
1.8.3.5.2. CLEAR-Flow
1.8.3.6. Uplink support: 2-port 10GBASE-SFP+
1.8.3.7. Power: 150W
1.8.4. Four (4) units of fully populated 24-port 10GBASE-SR SFP+ modules, each having:
1.8.4.1. ACL Hardware Resources: 2k ACLs per 12-port block
1.8.4.2. Policy based routing
1.8.4.3. sFlow sampling hardware
1.8.4.4. CLEAR-Flow
1.8.4.5. Backplane capacity:
1.8.4.5.1. 128 Gbps for 2 management switch modules
1.8.4.5.2. 64 Gbps for 1 management switch module
1.8.4.6. 128 load sharing groups
1.8.4.7. 32k Layer 2 MAC FDB
1.8.4.8. 12k IPv4 longest prefix match entries
1.8.4.9. 8k IPv4 host table
1.8.4.10. Extended IPv4 host cache
1.8.4.11. 6k IP multicast (S,G,V)
1.8.4.12. IPv6 forwarding hardware
1.8.4.13. Compliance: IEEE 802.3ae 10GBASE-X
1.8.5. Four (4) units of 10GBASE-LR SMF SFP+ modules
1.8.6. Environmental Conditions:
1.8.6.1. Operating Temperature: 0°C to 40°C (32°F to 104°F)
1.8.6.2. Operating Relative Humidity (non-condensing): 10% to 93%
1.8.6.3. Operational Shock: 30 m/s² (3g), 11ms, 60 shocks
1.8.6.4. Operational Sine Vibration: 5-100-5 Hz @ 0.2G, 0-Peak, 0.1 Oct./min.
1.8.6.5. Operational Random Vibration: 3-500 Hz @ 1.5g RMS

1.8.7. Standards
1.8.7.1. IT Equipment Safety Standards
1.8.7.1.1. UL 60950-1:2003 1st Ed., Listed Device (U.S.)
1.8.7.1.2. CSA 22.2 #60950-1-03 1st Ed. (Canada)
1.8.7.1.3. Complies with FCC 21 CFR Chapter 1, Subchapter J (U.S. Laser Safety)
1.8.7.1.4. CDRH Letter of Approval (U.S. FDA Approval)
1.8.7.1.5. IEEE 802.3af 6-2003 Environment A for PoE Applications
1.8.7.1.6. EN 60950-1:2001+A11 or equivalent
1.8.7.1.7. EN 60825-1+A2:2001 (Laser Safety)
1.8.7.1.8. TUV-R GS Mark by German Notified Body
1.8.7.1.9. 73/23/EEC Low Voltage Directive
1.8.7.1.10. CB Report & Certificate per IEC 60950-1:2001+All Country
1.8.7.1.11. AS/NZS 60950-1 (Australia/New Zealand)
1.8.7.2. EMI/EMC Standards/Certifications
1.8.7.2.1. FCC CFR 47 Part 15 Class A (U.S.)
1.8.7.2.2. ICES-003 Class A (Canada)
1.8.7.2.3. EN 55022:1998 Class A
1.8.7.2.4. EN 55024:1998 Class A
1.8.7.2.5. Includes IEC 61000-4-2, 3, 4, 5, 6, 8, 11
1.8.7.2.6. EN 61000-3-2, 3 (Harmonics & Flicker)
1.8.7.2.7. ETSI EN 300 386:2001 (EMC Telecommunications)
1.8.7.2.8. 89/336/EEC EMC Directive
1.8.7.2.9. CISPR 22:1997 Class A (International Emissions)
1.8.7.2.10. CISPR 24:1997 Class A (International Immunity)
1.8.7.2.11. IEC/EN 61000-4-2 Electrostatic Discharge, 8kV Contact, 15kV Air, Criteria A
1.8.7.2.12. IEC/EN 61000-4-3 Radiated Immunity 10V/m, Criteria A
1.8.7.2.13. IEC/EN 61000-4-4 Transient Burst, 1kV, Criteria A
1.8.7.2.14. IEC/EN 61000-4-5 Surge, 2kV, 4kV, Criteria A
1.8.7.2.15. IEC/EN 61000-4-6 Conducted Immunity, 0.15-80MHz, 10V/m unmod. RMS, Criteria A
1.8.7.2.16. IEC/EN 61000-4-11 Power Dips & Interruptions, >30%, 25 periods, Criteria C
1.8.7.3. International Standards
1.8.7.3.1. VCCI Class A (Japan Emissions)
1.8.7.3.2. AS/NZS 3548 ACA (Australia Emissions)
1.8.7.3.3. CNS 13438:1997 Class A (BSMI-Taiwan)
1.8.7.3.4. NOM/NYCE (Mexico), MIC Mark, EMC Approval (Korea)
1.8.7.4. Telecom Standards
1.8.7.4.1. ETSI EN 300 386:2001 (EMC Telecommunications)
1.8.7.4.2. ETSI EN 300 019 (Environmental for Telecommunications)
1.8.7.5. IEEE 802.3 Media Access Standards
1.8.7.5.1. IEEE 802.3z 1000BASE-X
1.8.7.5.2. IEEE 802.3ab 1000BASE-T
1.8.7.5.3. IEEE 802.3ae 10GBASE-X
1.8.7.5.4. IEEE 802.3ak 10GBASE-CX4
1.8.7.5.5. IEEE 802.3af Power over Ethernet
1.8.7.6. Environmental Standards
1.8.7.6.1. EN/ETSI 300 019-2-1 v2.1.2 – Class 1.2 Storage
1.8.7.6.2. EN/ETSI 300 019-2-2 v2.1.2 – Class 2.3 Transportation
1.8.7.6.3. EN/ETSI 300 019-2-3 v2.1.2 – Class 1e Operational
1.8.7.6.4. EN/ETSI 300 753 (1997-10) – Acoustic Noise
1.8.7.6.5. NEBS GR-63 Issue 2 – Sound Pressure
1.8.7.6.6. ASTM D3580 Random Vibration Unpackaged 1.5G
1.8.7.7. Security
1.8.7.7.1. Common Criteria EAL3+

7 For the Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System, One (1) Lot of Object-based Storage System, One (1) Lot of Data Warehouse System, and One (1) Lot of Management Cluster System, please allow us to propose the following modifications to the existing specifications:

  1. Change the ACL requirement to 4,000 ingress. Please refer to Item 6.3.8 (Page 49), Item 4.3.8 (Page 69), Item 3.3.8 (Page 84), Item 5.3.8 (Page 106).
  2. Remove all the items with EDP, ELRP, EMISTP, ESRP and EAPS. Please refer to Item 6.8.1 (Page 50, 51, 52), Item 4.8.1 (Page 70, 71, 72, 73), Item 3.8.1 (Page 85, 86, 87), Item 5.8.1 (Page 107, 108, 109).
  3. Remove the LLDP-MED extensions support. Please refer to Item 6.8.1 (Page 50), Item 4.8.1 (page 70), Item 3.8.1 (Page 85), Item 5.8.1 (page 107).
  4. Remove the part for network login (“Network login Web based method, 802.1X method, MAC based method, Local database for MAC/Web based Methods, Integration with Microsoft NAP, Multiple Supplicants, Same VLAN, HTTP/SSL for web-based method, Network login—Multiple supplicants, multiple VLANs, Trusted OUI, MAC security Lockdown Limit”). Please refer to Item 6.8.1 (Page 51), Item 4.8.1 (Page 71), Item 3.8.1 (Page 86), Item 5.8.1 (Page 107).
  5. Remove the MLD v1 snooping and MLD v2 snooping. Please refer to Item 6.8.1 (Page 52), Item 3.8.1 (Page 86), Item 5.8.1 (Page 107).
  6. Remove “Connectivity Fault Management (CFM)” and “Y.1731 compliant frame delay and delay variance measurement”. Please refer to Item 6.8.1 (Page 52), Item 4.8.1 (Page 72), Item 3.8.1 (Page 87), Item 5.8.1 (Page 108)
  7. Remove “Universal Port –VoIP auto configuration”. Please refer to Item 6.8.1 (page 52), Item 4.8.1 (Page 72), Item 3.8.1 (Page 87), Item 5.8.1 (Page 108).
  8. Replace “EN60950-1:2006 TUV-R GS mark” with “EN60950-1:2006+A11:2009+A12:2011”. Please refer to Item 6.11.1.4 (Page 54), Item 4.11.1.4 (Page 73), Item 3.11.1.4 (Page 89), Item 5.11.1.4 (Page 110).
  9. Remove “2006/95/EC Low Voltage Directive”. Please refer to Item 6.11.1.6 (Page 54), Item 4.11.1.6 (Page 74), Item 3.11.1.6 (Page 89), Item 5.11.1.6 (Page 110).
  10. Remove “ASTM D3580 Random Vibration Unpackaged 1.5G”. Please refer to Item 6.11.5.5 (Page 55), Item 4.11.5.5 (Page 75), Item 3.11.5.5 (Page 90), Item 5.11.5.5. (Page 111).

Reason:

  1. Based on common industry practice, the ingress ACL limit is usually 4,000 (this is the same for Cisco, Arista and Huawei).
  2. EDP, ELRP, EMISTP, ESRP and EAPS are all Extreme Networks proprietary protocols, which cannot be supported by other vendors.
  3. LLDP-MED extensions are used for IP phone / PoE scenarios. The required switches are for data centers, not for IP phones or PoE devices, so these extensions do not apply to a data center environment.
  4. Network login features are only for end-user devices such as laptops, PCs and mobile phones. They do not apply to a data center environment.
  5. MLD v1/v2 snooping is for IPv6 multicast, and there are nearly no IPv6 multicast applications even in development.
  6. CFM and Y.1731 are used for carrier-grade long-distance Ethernet to speed up fault isolation and failover. In a data center environment, the distances are short and faults can be detected almost immediately.
  7. “Universal Port – VoIP auto configuration” is for IP phones. It does not apply to a data center environment.
  8. The newest version of EN 60950-1:2006 is EN 60950-1:2006+A11:2009+A12:2011.
  9. The “2006/95/EC Low Voltage Directive” does not specify any particular technical standards; instead, it relies on IEC 60950-1:2005, which is already included in item 6.11.1.7.
  10. The “ASTM D3580 Random Vibration Unpackaged 1.5G” standard applies to products in a transportation environment. It does not apply to a data center environment.
OK. It is item 7.3.8 of the One (1) Lot of OpenStack Cloud System instead of 6.3.8, right?

 

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System” item 7.3.8., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Object-based Storage System” item 4.3.8., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Data Warehouse System” item 3.3.8., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System” item 5.3.8.

Change the ingress requirement of the “ACL rules” entry at item 7.3.8.

From: 4,096 ingress

To: 4,000 ingress

 

 

OK. It is item 7.8.1. of the One (1) Lot of OpenStack Cloud System instead of 6.8.1., right?

 

This will be changed:

 

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System” item 7.8.1., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Object-based Storage System” item 4.8.1., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Data Warehouse System” item 3.8.1., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System” item 5.8.1.

Change the “Ethernet Switch Features”:

From: EDP, Network virtualization, Identity Management, LLDP 802.1ab, LLDP-MED extensions, VLANS (Port-based and tagged trunks, MAC-based, Protocol-based, Private VLANs, VLAN translation), VMANs (Q-in-Q tunneling/IEEE 802.1ad, Egress queue selection based on 802.1p value in S-tag and C-tag, secondary ethertype support, customer edge port or Selective Q-in-Q, customer edge port CVID egress filtering/CVID translation), L2 ping or traceroute 802.1ag, Jumbo frames, QoS (egress port rate shaping/limiting, egress queue rate shaping/limiting), Link Aggregation Groups (LAG) static 802.3ad, LAG dynamic (802.3ad LACP) edge to servers only, LAG (802.3ad LACP) core between switches, Port loopback detection and shutdown (ELRP CLI), Software redundant port, STP (802.1D, 802.1s, 802.1w), STP EMISTP + PVST + compatibility mode (1 domain per port), STP EMISTP PVST + Full (multi-domain support), ERPS (4 max rings with matching ring ports), ESRP aware, EAPS edge (4 max domains with matching ring ports), Link Fault Signaling, Link Status Monitoring, ACLs (applied on ingress ports IPV4 static, on ingress ports IPV6 dynamic and egress ports, ingress meters and egress meters, L2 protocol tunneling Byte counters), CPU DoS protect, CPU monitoring, SNMPv3, SSH2 server and client, SCP/SFTP client and server, RADIUS and TACACS+ per command authentication, Network login Web based method, 802.1X method, MAC based method, Local database for MAC/Web based Methods, Integration with Microsoft NAP, Multiple Supplicants, Same VLAN, HTTPS/SSL for web-based method, Network login—Multiple supplicants, multiple VLANs, Trusted OUI, MAC security Lockdown Limit, IP security—DHCP Option 82—L2 mode, IP security-DHCP Option 82—L2 mode VLAN ID, IP security—DHCP IP lockdown, IP security—Trusted DHCP server ports, Static IGMP membership, IGMP filters, IPv4 unicast L2 switching, IPv4 multicast L2 switching IPv4 directed broadcast, IPv4 Fast-direct broadcast Ignore broadcast, IPv6 unicast L2 switching, IPv6 multicast L2 switching, IPv6 NetTools (ping, traceroute, BOOTP relay, DHCP, DNS, and SNTP), IGMP v1/v2 snooping, IGMP v3 snooping, Multicast VLAN Registration (MVR), Static MLD membership, MLD filters, MLD v1 snooping, MLD v2 snooping, sFlow accounting, CLI scripting, Web-based device management, Web-based management—HTTPS/SSL support, XML APIs (for partner integration), MIBs – Entity, for inventory, Connectivity Fault Management (CFM), Remote mirroring, Egress mirroring, Y.1731 compliant frame delay and delay variance measurement, MVRP – VLAN, Topology Management, CLEAR-flow, EAPS edge (4 max domains, single physical ring), System virtual routers (VRs), User-created Virtual Routers (VRs), Virtual Router and Forwarding (VRF), VLAN aggregation, Multinetting for forwarding, UDP Forwarding, UDP BootP relay forwarding, IPv4 Duplicate Address Detection (DAD), IPv4 unicast routing, including static routes, IPv4 multicast routing, including static routes, IPv6 unicast routing, including static routes, IPv6 interworking—IPv6-to-IPv4 and IPv6-in-IPv4 configured, IPv6 Duplicate Address Detection (DAD) without CLI management, IP security (DHCP Option 82—L3 mode, DHCP Option 82—L3 mode VLAN ID, Disable ARP learning, Gratuitous ARP protection, DHCP secured ARP / ARP validation), IP address security (DHCP snooping, Trusted DHCP server, Source IP lockdown ARP validation), Multi-Switch Link Aggregation Group (MLAG), Policy based routing (PBR) for IPv4, Policy based routing (PBR) for IPv6, PIM snooping, Protocol-based VLANs, RIP v1/v2, RIPng, Routing access policies, Route maps, Universal Port—VoIP auto configuration, Universal Port—Dynamic user-based security policies, Universal Port—Time-of-day policies, switch stacking using native or dedicated ports, EAPS Advanced Edge (multiple physical rings), ERPS-more domains (allows 32 rings with matching ring ports), ESRP-Full, ESRP-Virtual MAC, OSPFv2-Edge (limited to max of 4 active interfaces), OSPFv3-Edge (limited to max of 4 active interfaces), PIM-SM-Edge (limited to max of 2 active interfaces), VRRP, OpenFlow 1.3

To: Network virtualization, Identity Management, LLDP 802.1ab, LLDP-MED extensions, VLANS (Port-based and tagged trunks, MAC-based, Protocol-based, Private VLANs, VLAN translation), VMANs (Q-in-Q tunneling/IEEE 802.1ad, Egress queue selection based on 802.1p value in S-tag and C-tag, secondary ethertype support, customer edge port or Selective Q-in-Q, customer edge port CVID egress filtering/CVID translation), L2 ping or traceroute 802.1ag, Jumbo frames, QoS (egress port rate shaping/limiting, egress queue rate shaping/limiting), Link Aggregation Groups (LAG) static 802.3ad, LAG dynamic (802.3ad LACP) edge to servers only, LAG (802.3ad LACP) core between switches, Port loopback detection and shutdown, Software redundant port, STP (802.1D, 802.1s, 802.1w), STP PVST + compatibility mode (1 domain per port), STP PVST + Full (multi-domain support), ERPS (4 max rings with matching ring ports), Link Fault Signaling, Link Status Monitoring, ACLs (applied on ingress ports IPV4 static, on ingress ports IPV6 dynamic and egress ports, ingress meters and egress meters, L2 protocol tunneling Byte counters), CPU DoS protect, CPU monitoring, SNMPv3, SSH2 server and client, SCP/SFTP client and server, RADIUS and TACACS+ per command authentication, Network login Web based method, 802.1X method, MAC based method, Local database for MAC/Web based Methods, Integration with Microsoft NAP, Multiple Supplicants, Same VLAN, HTTPS/SSL for web-based method, Network login—Multiple supplicants, multiple VLANs, Trusted OUI, MAC security Lockdown Limit, IP security—DHCP Option 82—L2 mode, IP security-DHCP Option 82—L2 mode VLAN ID, IP security—DHCP IP lockdown, IP security—Trusted DHCP server ports, Static IGMP membership, IGMP filters, IPv4 unicast L2 switching, IPv4 multicast L2 switching IPv4 directed broadcast, IPv4 Fast-direct broadcast Ignore broadcast, IPv6 unicast L2 switching, IPv6 multicast L2 switching, IPv6 NetTools (ping, traceroute, BOOTP relay, DHCP, DNS, and SNTP), IGMP v1/v2 snooping, IGMP v3 snooping, Multicast VLAN Registration (MVR), Static MLD membership, MLD filters, MLD v1 snooping, MLD v2 snooping, sFlow accounting, CLI scripting, Web-based device management, Web-based management—HTTPS/SSL support, XML APIs (for partner integration), MIBs – Entity, for inventory, Connectivity Fault Management (CFM), Remote mirroring, Egress mirroring, Y.1731 compliant frame delay and delay variance measurement, MVRP – VLAN, Topology Management, CLEAR-flow, System virtual routers (VRs), User-created Virtual Routers (VRs), Virtual Router and Forwarding (VRF), VLAN aggregation, Multinetting for forwarding, UDP Forwarding, UDP BootP relay forwarding, IPv4 Duplicate Address Detection (DAD), IPv4 unicast routing, including static routes, IPv4 multicast routing, including static routes, IPv6 unicast routing, including static routes, IPv6 interworking—IPv6-to-IPv4 and IPv6-in-IPv4 configured, IPv6 Duplicate Address Detection (DAD) without CLI management, IP security (DHCP Option 82—L3 mode, DHCP Option 82—L3 mode VLAN ID, Disable ARP learning, Gratuitous ARP protection, DHCP secured ARP / ARP validation), IP address security (DHCP snooping, Trusted DHCP server, Source IP lockdown ARP validation), Multi-Switch Link Aggregation Group (MLAG), Policy based routing (PBR) for IPv4, Policy based routing (PBR) for IPv6, PIM snooping, Protocol-based VLANs, RIP v1/v2, RIPng, Routing access policies, Route maps, Universal Port—VoIP auto configuration, Universal Port—Dynamic user-based security policies, Universal Port—Time-of-day policies, switch stacking using native or dedicated ports, ERPS-more domains (allows 32 rings with matching ring ports), OSPFv2-Edge (limited to max of 4 active interfaces), OSPFv3-Edge (limited to max of 4 active interfaces), PIM-SM-Edge (limited to max of 2 active interfaces), VRRP, OpenFlow 1.3, ethernet automatic protection switching, neighboring switch discovery, Layer 2 loop detection, MSTP, redundant Layer 2 and routing services

 

No (re: proposed modification no. 3, LLDP-MED extensions). We have plans of using this in the future.

No (re: proposed modification no. 4, network login). We have plans of using this in the future.

No (re: proposed modification no. 5, MLD v1/v2 snooping). We are already using IPv6 in our systems.

No (re: proposed modification no. 6, CFM and Y.1731). We want carrier-grade equipment for reliability design.

No (re: proposed modification no. 7, Universal Port – VoIP auto configuration). We have plans of using this in the future.

OK. It is item 7.11.1.4. of the One (1) Lot of OpenStack Cloud System instead of 6.11.1.4., right?

 

 

For the “Supply, Delivery, Installation and Configuration of One (1) Lot of OpenStack Cloud System” item 7.11.1.4., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Object-based Storage System,” item 4.11.1.4., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Data Warehouse System” item 3.11.1.4., for the “Supply, Delivery, Installation and Configuration of One (1) Lot of Management Cluster System” item 5.11.1.4.,

Change the “IT Equipment Safety Standards” entry at item 7.11.1.4.

From: EN60950-1:2006 TUV-R GS mark

To: EN60950-1:2006 or equivalent

Re: proposed modification no. 9 – we would like to retain this portion.

Re: proposed modification no. 10 – we would like to ensure that the equipment we are procuring is subject to standard environmental conditions.

8 On page 50 of the Bid Data Sheet, item no. 29.2 (e) – Valid and Current Certificate of Distributorship, Dealership, or Resellership of the product being offered, issued by the principal or manufacturer of the product (if the Bidder is not the manufacturer). If not directly issued by the manufacturer to the supplier, the bidder must also submit or include a CTC of a valid and current Certificate of Distributorship, Dealership, or Resellership that links the supplier to the manufacturer.

Clarification: Please confirm whether the bidder is required to submit the Manufacturer’s certification for all items or for Major Components (servers and network equipment) only.

 

Yes.
9 On pages 50-51 of the Bid Data Sheet, item no. 29.2 (e) – Valid and Current ISO 9001 Quality Management System Certificate issued to the manufacturer by an Independent Certifying Body.

Clarification: Please confirm whether the bidder is required to submit the ISO certification for all items or for Major Components (servers and network equipment) only.

Yes.
10 Please note also that the required quantity of Ceph Monitor Nodes under the Technical Specifications is three (3) units (please see page 84, Item no. 2, Three (3) units of Ceph Monitor Node), but the required quantity in the Annex VII-A Detailed Financial Breakdown is six (6) units (please see page 194, under OPENSTACK RACK, Ceph Monitor Nodes with a quantity of 6 units). The Ceph Monitor Nodes of the OpenStack Cloud System should be three (3) units overall. For the three (3) racks of the OpenStack Cloud System, there should be one (1) monitor node for each OpenStack rack.
11 We would like to appeal for the possibility of relaxing the technical specifications for all of the servers, since the detailed technical requirements are such that only one brand can comply with the current technical specifications. May we suggest removing the TECHNICAL SPECIFICATIONS portion and retaining the HARDWARE SUMMARY portion as the reference for all of your server requirements? In addition, may we propose the Manufacturer’s Standard in the HARDWARE SUMMARY, as with the BAREBONE RAID Controller, rack form factor and power supply? By doing so, more globally known server brands will be able to participate in your requirements. The technical specifications section details each hardware component that comprises the server system. The power supply specification is required based on our data center rack layout design. Specifications indicated in the TOR are minimum requirements and should not be treated as absolute.
12 For all server requirements, in complying with the total memory and storage of the servers can we use a different configuration however maintaining the same capacity for the DIMM (memory) and HDD (hard disk) required?

Example: 32GB memory can be fulfilled either by 4 x 8GB or 2 x 16GB.

No. We have designed the configuration to accommodate future upgrades.
13 For the OpenStack Compute Nodes (page 99) and Compute Nodes (page 157), can we propose 2 units of 1U servers instead of a 2U 2-node server? We can give you the same performance and better redundancy with the same space requirement. Again, a 2U 2-node server pertains to a certain brand. No. We have designed the configuration to accommodate future upgrades.
14 For the OpenStack Compute Nodes, the specified processor (E5-2697 v2) is only scalable up to 2 processors per node, but the quantity stated is 4. Can we interpret all hardware specifications as totals for the 2 nodes combined? Yes.
15 For the 4U Ceph Nodes, the number of disks needed within a 4U rack space points to one brand. Our suggestion is to focus on total capacity (for SSD and SAS/SATA) while maintaining the initially allocated 4U rack space. The 4U Ceph Node specifications are not brand-specific.
16 To ensure compatibility and support with most hardware vendors, our suggestion is to use Enterprise Linux, either RHEL or SUSE. Yes.
17 For all hardware specifications under 10/40 Gbps Layer 3 Ethernet Switches (IT Equipment Safety Standards), EN 60825-1+A2:2007: may we request to change the requirement to EN 60825-1:2007 and omit the “+A2”? This might be a typographical error, as this standard does not have “+A2”. Yes. You may omit the “+A2”.

 

For the “Supply, Delivery, Installation and Configuration of the One (1) Lot of Object-based Storage System” item 4.11.1.5., for the “Supply, Delivery, Installation and Configuration of the One (1) Lot of Data Warehouse System” item 3.11.1.5., and for the “Supply, Delivery, Installation and Configuration of the One (1) Lot of Management Cluster System” item 5.11.1.5.

 

Remove the “+A2” detail:

From: EN 60825-1+A2:2007

To: EN 60825-1:2007

18 For all hardware specifications under 10/40Gbps Layer 3 Ethernet Switches (IEEE 802.3 Media Access Standards – IEEE 802.3ab 1000BASE-T, IEEE 802.3z 1000BASE-X, IEEE 802.3ae 10GBASE-X, and IEEE 802.3ba 40GBASE-X): does this mean that the 10/40 Gbps Layer 3 switches should be able to support these standards? This will depend on the SFP transceivers you are going to use on the switch. Yes.
19 For all hardware specifications under 10/40Gbps Layer 3 Ethernet Switches – we would like to clarify what kind of training and certification you require for this item. Can this be done during the installation in your office, with a transfer of knowledge of the product? Yes.
20 For all hardware specifications under 10/40Gbps Layer 3 Ethernet Switches – we would like to clarify what kind of training and certification you require for this item. Can this be done during the installation in your office, with a transfer of knowledge of the product? Yes.
21 For all hardware specifications under the Core Switch Module – we would like to clarify whether you already have the core switch chassis and only need additional switch modules for your existing core switch, or whether we should provide a core switch with 2×48-port switch modules installed. Yes, we have an existing core switch. We only need additional core switch modules for this lot, but you may opt to provide a core switch based on its minimum specifications (please see the addition to the TOR).
22 May we know your existing rack cabinet brand, in order to provide compatible 1U tool-less rack fillers including mounting holes and clips? APC rack cabinets.
23 For all technical specifications under Input/Output: Four (4) rear USB 2.0 ports and two (2) USB 2.0 ports via header.

May we request that you consider just the total number of USB ports, regardless of their location/position in the server.

There are no USB 2.0 port specifications with a location/position such as “rear” or “header” in the TOR.
24 For all technical specifications under Operating Environment/Compliance:

Aside from the RoHS compliance, for the detailed Environmental Specifications, may we request that they read as follows: “Environmental Specifications or equivalent based on ASHRAE Class A3 thermal guidelines.”

No. ASHRAE Class A3 covers data center environmental guidelines, while the RoHS directive covers substance restrictions.
25 For all technical specifications under Power Supply: AC Input, DC Output and Power Distributor.

May we request to remove the detailed technical specifications and consider manufacturer’s standard and certification compliance?

Following the previous GCP Hardware bidding, the power specifications are modified: the “AC Input” and other power input specifications are retained, while the “DC Output” and “Power Distributor” requirements are removed.

Additional changes in the Technical Specifications:

For the “Supply, Delivery, Installation and Configuration of the One (1) Lot of OpenStack Cloud System”,

  • Change the number of HDDs under the “Storage Drive” entry at item 5.1.6.1.
    • From: Twenty (20)
    • To: Twenty-four (24)
  • Change the SSD specification under the “Storage Drive” entry at item 5.1.6.2.
    • From: Four (4) 400GB SATA 6.0 Gb/s – 2.5″ SSD
    • To: One (1) 800GB PCI-E Non-Volatile Memory Express (NVMe) Storage Card (optimized for write-intensive applications) or equivalent
  • Add description at item 5.2.2.2.
    • From: Up to 130W TDP
    • To: Up to 130W Thermal Design Power (TDP)
  • Edit numbering from item 5.2.7.7.
    • From: “5.2.7.7. One (1) Fast UART portHard Drives”
    • To:

“        5.2.7.7. One (1) Fast UART port

5.2.8. Hard Drives”

  • Change number of HDDs and edit the numbering of item 5.2.7.8.
    • From: “5.2.7.8. Twenty (20) 1.2TB SAS 2.0 6.0Gb/s – 10000RPM – 2.5″ HDD, 64MB Cache”
    • To: “5.2.8.1. Twenty-four (24) 1.2TB SAS 2.0 6.0Gb/s – 10000RPM – 2.5″ HDD, 64MB Cache”
  • Change specification and edit the numbering of item 5.2.7.9.
    • From: “5.2.7.9. Four (4) 400GB SATA 3.0 6.0Gb/s – 2.5” SSD”
    • To: “5.2.8.2. One (1) 800GB PCI-E Non-Volatile Memory Express (NVMe) Storage Card (optimized for write-intensive applications) or equivalent”
  • Edit numbering
    • From items “5.2.8. Boot Drive” ending at “5.2.13.2.4. 5% to 95% non-operating relative humidity (non-condensing)”
    • To become: “5.2.9. Boot Drive” ending at “5.2.14.2.4. 5% to 95% non-operating relative humidity (non-condensing)”

For the “Supply, Delivery, Installation and Configuration of the One (1) Lot of Object-based Storage System”,

  • Change the number of HDDs under the “Storage Drive” entry at items 2.1.6.1. and 2.2.9.2.
    • From: Thirty (30)
    • To: Thirty-six (36)

Kindly use the following forms attached in this Supplemental Bid Bulletin:

  • Revised Technical Specification as of 5 January 2016
  • Revised Annex VII-A Detailed Financial Breakdown as of 5 January 2016
  • Revised Annex VIII-A Goods Offered from Abroad as of 5 January 2016
  • Revised Annex VIII-B Goods Offered From Within the Philippines as of 5 January 2016

To download this supplemental bidding document, click on the link below:

Supplemental Bid Bulletin No. 2 – Supply, Delivery, Installation, Configuration of One (1) Lot of Government Common Platform Hardware

For information and guidance of all concerned.

Issued this 5th day of January 2016. 

(SGD.) DENIS F. VILLORENTE
BAC4IGOV, Chairman