Create your overcloud control plane image.
This is the image the undercloud will deploy to become the KVM (or QEMU, Xen, etc.) cloud control plane.
$OVERCLOUD_*_DIB_EXTRA_ARGS (CONTROL, COMPUTE, BLOCKSTORAGE) are used to pass additional build-time arguments to disk-image-create.
$SSL_ELEMENT is used when building a cloud with SSL endpoints - it should be set to openstack-ssl in that situation.
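For example, you might pass an extra element to the control plane image build and enable SSL endpoints as described above (the element names shown are illustrative; use whichever additional elements your deployment needs):
# Illustrative only: extra diskimage-builder elements for the control plane
# image, plus the SSL element described above.
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS="pypi"
SSL_ELEMENT="openstack-ssl"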
NODE_ARCH=$(os-apply-config -m $TE_DATAFILE --key arch --type raw)
The Undercloud UI needs SNMPd for monitoring of every Overcloud node:
if [ "$USE_UNDERCLOUD_UI" -ne 0 ] ; then
OVERCLOUD_CONTROL_DIB_EXTRA_ARGS="$OVERCLOUD_CONTROL_DIB_EXTRA_ARGS snmpd"
OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS="$OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS snmpd"
OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS="$OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS snmpd"
fi
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-control ntp hosts \
baremetal boot-stack cinder-api ceilometer-collector \
ceilometer-api ceilometer-agent-central ceilometer-agent-notification \
os-collect-config horizon neutron-network-node dhcp-all-interfaces \
swift-proxy swift-storage keepalived haproxy \
$DIB_COMMON_ELEMENTS $OVERCLOUD_CONTROL_DIB_EXTRA_ARGS ${SSL_ELEMENT:-} 2>&1 | \
tee $TRIPLEO_ROOT/dib-overcloud-control.log
Unless you are just building the images, load the image into Glance.
OVERCLOUD_CONTROL_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-control.qcow2)
Create your block storage image if some block storage nodes are to be used. This is the image the undercloud deploys for the additional cinder-volume instances.
if [ $OVERCLOUD_BLOCKSTORAGESCALE -gt 0 ]; then
    $TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
        -a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-cinder-volume ntp hosts \
        baremetal cinder-volume os-collect-config \
        dhcp-all-interfaces $DIB_COMMON_ELEMENTS \
        $OVERCLOUD_BLOCKSTORAGE_DIB_EXTRA_ARGS 2>&1 | \
        tee $TRIPLEO_ROOT/dib-overcloud-cinder-volume.log
And again load the image into Glance, unless you are just building the images.
    OVERCLOUD_BLOCKSTORAGE_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-cinder-volume.qcow2)
fi
Create your overcloud compute image. This is the image the undercloud deploys to host KVM (or QEMU, Xen, etc.) instances.
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST \
-a $NODE_ARCH -o $TRIPLEO_ROOT/overcloud-compute ntp hosts \
baremetal nova-compute nova-kvm neutron-openvswitch-agent os-collect-config \
dhcp-all-interfaces $DIB_COMMON_ELEMENTS $OVERCLOUD_COMPUTE_DIB_EXTRA_ARGS 2>&1 | \
tee $TRIPLEO_ROOT/dib-overcloud-compute.log
Load the image into Glance. If you are just building the images you are done.
OVERCLOUD_COMPUTE_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-compute.qcow2)
When running the overcloud in VMs, use qemu; for physical machines, set this to kvm:
OVERCLOUD_LIBVIRT_TYPE=${OVERCLOUD_LIBVIRT_TYPE:-"qemu"}
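For example, when deploying to physical hardware:
OVERCLOUD_LIBVIRT_TYPE="kvm"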
Set the public interface of the overcloud network node::
NeutronPublicInterface=${NeutronPublicInterface:-'eth0'}
Set the NTP server for the overcloud::
OVERCLOUD_NTP_SERVER=${OVERCLOUD_NTP_SERVER:-''}
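For example (the server shown is illustrative; use one reachable from your overcloud nodes):
OVERCLOUD_NTP_SERVER="pool.ntp.org"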
If you want to permit VMs access to bare metal networks, you need to define flat networks and bridge mappings in Neutron::
OVERCLOUD_FLAT_NETWORKS=${OVERCLOUD_FLAT_NETWORKS:-''}
OVERCLOUD_BRIDGE_MAPPINGS=${OVERCLOUD_BRIDGE_MAPPINGS:-''}
OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE=${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE:-''}
OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE=${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE:-''}
OVERCLOUD_VIRTUAL_INTERFACE=${OVERCLOUD_VIRTUAL_INTERFACE:-'br-ex'}
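For example, to expose the bare metal control plane network to instances (the values shown are illustrative and must match your environment):
# Illustrative only: network, bridge and interface names depend on your setup.
OVERCLOUD_FLAT_NETWORKS="ctlplane"
OVERCLOUD_BRIDGE_MAPPINGS="ctlplane:br-ctlplane"
OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE="br-ctlplane"
OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE="eth0"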
If you are using SSL, your compute nodes will need static mappings to your endpoint in /etc/hosts (because we don’t do dynamic undercloud DNS yet). Set this to the DNS name you’re using for your SSL certificate - the heat template looks up the controller address within the cloud:
OVERCLOUD_NAME=${OVERCLOUD_NAME:-''}
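For example, matching the DNS name on your SSL certificate (the hostname shown is illustrative):
OVERCLOUD_NAME="overcloud.example.com"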
TripleO explicitly models key settings for OpenStack, as well as settings that require cluster awareness to configure. To configure arbitrary additional settings, provide a JSON string with them in the structure required by the template ExtraConfig parameter.
OVERCLOUD_EXTRA_CONFIG=${OVERCLOUD_EXTRA_CONFIG:-''}
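As a purely illustrative sketch (the keys shown are hypothetical and must match settings your templates and image elements actually consume):
# Hypothetical example only: a JSON blob handed to the ExtraConfig template
# parameter; the key names below are placeholders, not guaranteed settings.
OVERCLOUD_EXTRA_CONFIG='{"nova": {"verbose": "true"}}'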
Choose whether to deploy or update. Use stack-update to update:
HEAT_OP=stack-create
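If you are updating an existing overcloud, set it to stack-update instead:
HEAT_OP=stack-update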
Wait for the bare metal cloud to register its bare metal nodes with the scheduler:
expected_nodes=$(( $OVERCLOUD_COMPUTESCALE + $OVERCLOUD_CONTROLSCALE + $OVERCLOUD_BLOCKSTORAGESCALE ))
wait_for 60 1 wait_for_hypervisor_stats $expected_nodes
Set the password for the Overcloud SNMPd; the same password needs to be set in the Undercloud Ceilometer:
UNDERCLOUD_CEILOMETER_SNMPD_PASSWORD=$(os-apply-config -m $TE_DATAFILE --key undercloud.ceilometer_snmpd_password --type raw --key-default '')
Create unique credentials:
setup-overcloud-passwords $TRIPLEO_ROOT/tripleo-overcloud-passwords
source $TRIPLEO_ROOT/tripleo-overcloud-passwords
We need an environment file to store the parameters we’re going to give heat:
HEAT_ENV=${HEAT_ENV:-"${TRIPLEO_ROOT}/overcloud-env.json"}
Read the heat env in for updating:
if [ -e "${HEAT_ENV}" ]; then
ENV_JSON=$(cat "${HEAT_ENV}")
else
ENV_JSON='{"parameters":{}}'
fi
Set parameters we need to deploy a KVM cloud:
NeutronControlPlaneID=$(neutron net-show ctlplane | grep ' id ' | awk '{print $4}')
ENV_JSON=$(jq '.parameters = {
"MysqlInnodbBufferPoolSize": 100
} + .parameters + {
"AdminPassword": "'"${OVERCLOUD_ADMIN_PASSWORD}"'",
"AdminToken": "'"${OVERCLOUD_ADMIN_TOKEN}"'",
"CeilometerPassword": "'"${OVERCLOUD_CEILOMETER_PASSWORD}"'",
"CeilometerMeteringSecret": "'"${OVERCLOUD_CEILOMETER_SECRET}"'",
"CinderPassword": "'"${OVERCLOUD_CINDER_PASSWORD}"'",
"CloudName": "'"${OVERCLOUD_NAME}"'",
"controllerImage": "'"${OVERCLOUD_CONTROL_ID}"'",
"GlancePassword": "'"${OVERCLOUD_GLANCE_PASSWORD}"'",
"HeatPassword": "'"${OVERCLOUD_HEAT_PASSWORD}"'",
"HypervisorNeutronPhysicalBridge": "'"${OVERCLOUD_HYPERVISOR_PHYSICAL_BRIDGE}"'",
"HypervisorNeutronPublicInterface": "'"${OVERCLOUD_HYPERVISOR_PUBLIC_INTERFACE}"'",
"NeutronBridgeMappings": "'"${OVERCLOUD_BRIDGE_MAPPINGS}"'",
"NeutronControlPlaneID": "'${NeutronControlPlaneID}'",
"NeutronFlatNetworks": "'"${OVERCLOUD_FLAT_NETWORKS}"'",
"NeutronPassword": "'"${OVERCLOUD_NEUTRON_PASSWORD}"'",
"NeutronPublicInterface": "'"${NeutronPublicInterface}"'",
"NovaComputeLibvirtType": "'"${OVERCLOUD_LIBVIRT_TYPE}"'",
"NovaPassword": "'"${OVERCLOUD_NOVA_PASSWORD}"'",
"NtpServer": "'"${OVERCLOUD_NTP_SERVER}"'",
"SwiftHashSuffix": "'"${OVERCLOUD_SWIFT_HASH}"'",
"SwiftPassword": "'"${OVERCLOUD_SWIFT_PASSWORD}"'",
"NovaImage": "'"${OVERCLOUD_COMPUTE_ID}"'",
"SSLCertificate": "'"${OVERCLOUD_SSL_CERT}"'",
"SSLKey": "'"${OVERCLOUD_SSL_KEY}"'"
}' <<< $ENV_JSON)
if [ $OVERCLOUD_BLOCKSTORAGESCALE -gt 0 ]; then
    ENV_JSON=$(jq '.parameters = {} + .parameters + {
        "BlockStorageImage": "'"${OVERCLOUD_BLOCKSTORAGE_ID}"'"
    }' <<< $ENV_JSON)
fi
Save the finished environment file:
jq . > "${HEAT_ENV}" <<< $ENV_JSON
chmod 0600 "${HEAT_ENV}"
Add Keystone certs/key into the environment file:
generate-keystone-pki --heatenv $HEAT_ENV
Deploy an overcloud:
make -C $TRIPLEO_ROOT/tripleo-heat-templates overcloud.yaml \
COMPUTESCALE=$OVERCLOUD_COMPUTESCALE \
CONTROLSCALE=$OVERCLOUD_CONTROLSCALE \
BLOCKSTORAGESCALE=$OVERCLOUD_BLOCKSTORAGESCALE \
heat $HEAT_OP -e $TRIPLEO_ROOT/overcloud-env.json \
-f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml \
-P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
overcloud
You can watch the console via virsh/virt-manager to observe the PXE boot/deploy process. After the deploy is complete, the machines will reboot and be available.
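For example (the domain name and libvirt connection depend on how your test environment was created):
# Illustrative only: list the test environment VMs and attach to a console.
virsh list --all
virsh console baremetal_0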
While we wait for the stack to come up, build an end user disk image and register it with glance:
USER_IMG_NAME="user.qcow2"
$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create $NODE_DIST vm $TEST_IMAGE_DIB_EXTRA_ARGS \
-a $NODE_ARCH -o $TRIPLEO_ROOT/user 2>&1 | tee $TRIPLEO_ROOT/dib-user.log
Wait for the stack to complete, then get the overcloud endpoint and IP from the heat stack output:
wait_for_stack_ready $(($OVERCLOUD_STACK_TIMEOUT * 60 / 10)) 10 $STACKNAME
OVERCLOUD_ENDPOINT=$(heat output-show $STACKNAME KeystoneURL|sed 's/^"\(.*\)"$/\1/')
OVERCLOUD_IP=$(echo $OVERCLOUD_ENDPOINT | awk -F '[/:]' '{print $4}')
We don’t (yet) preserve ssh keys on rebuilds.
ssh-keygen -R $OVERCLOUD_IP
Export the overcloud endpoint and credentials to your test environment.
NEW_JSON=$(jq '.overcloud.password="'${OVERCLOUD_ADMIN_PASSWORD}'" | .overcloud.endpoint="'${OVERCLOUD_ENDPOINT}'" | .overcloud.endpointhost="'${OVERCLOUD_IP}'"' $TE_DATAFILE)
echo $NEW_JSON > $TE_DATAFILE
Source the overcloud configuration:
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc
Exclude the overcloud from proxies:
export no_proxy=$no_proxy,$OVERCLOUD_IP
If we updated the cloud we don’t need to do admin setup again - skip down to Wait for Nova Compute.
Perform admin setup of your overcloud.
init-keystone -o $OVERCLOUD_IP -t $OVERCLOUD_ADMIN_TOKEN \
-e admin.example.com -p $OVERCLOUD_ADMIN_PASSWORD -u heat-admin \
${SSLBASE:+-s $PUBLIC_API_URL}
# Creating these roles to be used by tenants using swift
keystone role-create --name=swiftoperator
keystone role-create --name=ResellerAdmin
setup-endpoints $OVERCLOUD_IP \
--cinder-password $OVERCLOUD_CINDER_PASSWORD \
--glance-password $OVERCLOUD_GLANCE_PASSWORD \
--heat-password $OVERCLOUD_HEAT_PASSWORD \
--neutron-password $OVERCLOUD_NEUTRON_PASSWORD \
--nova-password $OVERCLOUD_NOVA_PASSWORD \
--swift-password $OVERCLOUD_SWIFT_PASSWORD \
--ceilometer-password $OVERCLOUD_CEILOMETER_PASSWORD \
${SSLBASE:+--ssl $PUBLIC_API_URL}
keystone role-create --name heat_stack_user
user-config
setup-neutron "" "" 10.0.0.0/8 "" "" "" 8.8.8.8 192.0.2.45 192.0.2.64 192.0.2.0/24
OVERCLOUD_NAMESERVER=$(os-apply-config -m $TE_DATAFILE --key overcloud.nameserver --type netaddress --key-default '8.8.8.8')
If you want a demo user in your overcloud (probably a good idea).
os-adduser -p $OVERCLOUD_DEMO_PASSWORD demo demo@example.com
Work around https://bugs.launchpad.net/diskimage-builder/+bug/1211165 by recreating the m1.tiny flavor with a 2 GB disk:
nova flavor-delete m1.tiny
nova flavor-create m1.tiny 1 512 2 1
Register the end user image with glance.
glance image-create --name user --public --disk-format qcow2 \
--container-format bare --file $TRIPLEO_ROOT/$USER_IMG_NAME
Wait for Nova Compute
wait_for 30 10 nova service-list --binary nova-compute 2\>/dev/null \| grep 'enabled.*\ up\ '
Wait for L2 Agent On Nova Compute
wait_for 30 10 neutron agent-list -f csv -c alive -c agent_type -c host \| grep "\":-).*Open vSwitch agent.*overcloud-novacompute\""
Log in as a user.
source $TRIPLEO_ROOT/tripleo-incubator/overcloudrc-user
If you just created the cloud you need to add your keypair to your user.
user-config
Then you can deploy a VM:
IMAGE_ID=$(glance image-show user | awk '/ id / {print $4}')
nova boot --key-name default --flavor m1.tiny --block-device source=image,id=$IMAGE_ID,dest=volume,size=2,shutdown=preserve,bootindex=0 demo
Add an external IP for it.
wait_for 10 5 neutron port-list -f csv -c id --quote none \| grep id
PORT=$(neutron port-list -f csv -c id --quote none | tail -n1)
FLOATINGIP=$(neutron floatingip-create ext-net \
--port-id "${PORT//[[:space:]]/}" \
| awk '$2=="floating_ip_address" {print $4}')
And allow network access to it.
neutron security-group-rule-create default --protocol icmp \
--direction ingress --port-range-min 8
neutron security-group-rule-create default --protocol tcp \
--direction ingress --port-range-min 22 --port-range-max 22
After which, you should be able to ping it:
wait_for 30 10 ping -c 1 $FLOATINGIP