Exam C1000-086 – IBM Cloud Pak for Data V3.x Administrator
1. Which role must a Red Hat OpenShift user have to install Cloud Pak for Data into an existing project?
A. admin
B. installer
C. cluster-admin
D. cpd-admin-role
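For context, a project-scoped admin role can be granted with a standard OpenShift command; the user name and project name below are placeholders, not values from the exam:

```shell
# Grant the OpenShift "admin" role on an existing project
# ("cpd-installer" and "zen" are hypothetical examples)
oc adm policy add-role-to-user admin cpd-installer -n zen
```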
2. An administrator is planning to use the Portworx storage that is included with Cloud Pak for Data. The cluster has 12 worker nodes, but only 8 of them have a raw disk designated for storage.
What is the default behavior when the administrator installs the entitled instance of Portworx on the cluster?
A. Portworx will be installed on the maximum number of entitled nodes.
B. Portworx will be installed on the nodes with designated raw storage.
C. Portworx will be installed on nodes up to the entitlement with raw storage.
D. Portworx will be installed on all 12 worker nodes, regardless of whether or not there is raw storage.
3. SMTP can be set up to send communications using a mailer daemon or a From account.
Which two parameters must always be configured?
A. From Account
B. SMTP username
C. SMTP password
D. SMTP mail server address
4. In order to configure SSO, information must be specified about the Identity Provider in a configuration file. On which pod must this configuration file be edited?
A. portal pod
B. zen-core pod
C. usermgmt pod
D. zen-admin pod
5. Which resource would an administrator update to configure a custom name for the DNS service?
A. build/ibm-nginx
B. deploy/ibm-nginx
C. build/ibm-router
D. deploy/ibm-router
6. What is used to store the display setting for the terms and conditions prompt in Cloud Pak for Data?
A. secret
B. JSON file
C. YAML file
D. configmap
7. When configuring Cloud Pak for Data to display a terms and conditions prompt at login, how is the configuration enabled by the administrator?
A. The configuration is enabled by copying the login configuration file to the zen-core pods.
B. The configuration is enabled by copying the login configuration file to the usermgmt pods.
C. The configuration is enabled in the Cloud Pak for Data console under the navigation menu path Administer and then Manage platform.
D. The configuration is enabled in the Cloud Pak for Data console
8. When monitoring resources on the Cloud Pak for Data platform, what is a service instance?
A. A resource which is created in response to a specific user request.
B. A resource which defines the usage for one or more pods in Red Hat OpenShift.
C. A specific instance of a service that was installed on top of Cloud Pak for Data.
D. An instance of a Red Hat OpenShift service which has been created as part of the Cloud Pak for Data installation.
10. What task would an administrator perform to track resource usage for license auditing purposes?
A. Manually input the data into IBM License Metric Tool (ILMT).
B. Go to the Platform management page every day and take a screen capture to collect the resources.
C. Install the IBM Cloud Platform Common Services on the Red Hat OpenShift cluster and utilize the License Service.
D. Install the Kubernetes Incubator Metrics Server on the Red Hat OpenShift cluster and retrieve the metrics directly from the underlying data store.
11. What is indicated when the vCPU column in Manage platform states: 5.00 of 4.00?
A. The pods are using more vCPU than they reserved.
B. The pods are using more vCPU than the limit states.
C. The pods are using more vCPU than the license states.
D. The pods are using more vCPU than expected and pods will be removed.
12. Which statement is true regarding scaling a scalable service?
A. When scaling services, the administrator can choose from t-shirt sizes.
B. The t-shirt sizes for scaling are small, medium, large, and extra large.
C. Once scaled to the large configuration, a service can no longer be scaled down.
D. When scaling services, the administrator has to choose a scale from 1-10 which indicates the number of pods that will be started for each component of the service.
13. Which Data Virtualization service role has access to data in all user tables and views?
A. Data Virtualization User
B. Data Virtualization Analyst
C. Data Virtualization Steward
D. Data Virtualization Engineer
14. The zen-metastoredb pods in the Cloud Pak for Data project are the only pods constantly restarting, and the restart count is higher than 250. Users cannot log in to the console, and even after restarting the zen-metastoredb pods and all other pods in the Red Hat OpenShift project, the problem remains.
What is the most probable cause for the zen-metastoredb pods continually crashing?
A. NFS storage was used for the lite assembly and the NFS server has become unavailable.
B. The metastore database pods require internet access because they are implemented using CockroachDB and the cluster is air-gapped.
C. There are fewer than 3 worker nodes allocated to the system and the metastore database pods have been set up with anti-affinity, so they are unable to start.
D. The NTP server is misconfigured or cannot be reached, or there is too large a time difference between the worker nodes running the metastore database pods.
15. Which three pod statuses indicate that a pod may not be running correctly?
A. Error
C. Pending
D. Running
E. Completed
F. CrashLoopBackOff
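As a practical aside, pod statuses are typically inspected from the command line; the project name "zen" below is a placeholder for the Cloud Pak for Data project:

```shell
# List pods and their statuses in the Cloud Pak for Data project
# ("zen" is a hypothetical project name)
oc get pods -n zen

# Statuses such as Error, Pending, and CrashLoopBackOff warrant
# investigation, e.g. with:
oc describe pod <pod-name> -n zen
oc logs <pod-name> -n zen --previous
```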
16. How can an administrator ensure that a pod will run on a specific node?
A. label pod and add nodeSelector to a node
B. label pod and add nodeAffinity to the node
C. label node and add nodeAffinity to the pod
D. label node and add nodeSelector to the pod
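A minimal sketch of the label-the-node-and-use-nodeSelector approach; the label key/value, node name, pod name, and image are all hypothetical:

```yaml
# First label the target node, e.g.:
#   oc label node worker-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod              # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd               # pod schedules only onto nodes with this label
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```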
17. The administrator wants to resume an upgrade after having resolved the initial error. Which two tasks must be done?
A. Delete all the pods related to the module.
B. Delete all the pvc and pv related to the module.
C. Roll back the failed release to a previous release.
D. Update the cr-cpdinstall to re-trigger the upgrade.
E. Execute the helm delete command to delete the associated helm chart.
18. Where must the Cloud Pak for Data administrator log in to run a diagnostics job?
A. the bastion node
B. the Red Hat OpenShift console
C. the Cloud Pak for Data web client