Section 6 is one of the heaviest and most technical domains on the CQE exam. The goal is not just to memorize formulas. It is to know which statistical method fits the situation, what the output really means, and which assumptions or misuse patterns can invalidate a conclusion.

This section rewards disciplined thinking: identify the data type, confirm stability, choose the right test or chart, and interpret results in process terms rather than just mathematical terms.

Section Scope and Exam Framing

The CQE exam uses Section 6 to test whether you can support quality decisions with data. It is less about raw calculation speed than about selection, interpretation, and avoiding incorrect conclusions.

Common question patterns include:

  • Which tool or test matches the data type and study structure?
  • Which summary statistic best represents the data?
  • What probability rule applies: addition, multiplication, conditional, independence?
  • Is the process stable enough for capability analysis?
  • Does the data support causation, or only a relationship?
  • What assumption or misuse makes the presented conclusion weak?

Start every quantitative question by identifying the type of data, the structure of the comparison, and whether the process conditions are stable.

Collecting and Summarizing Data

Data types

The first CQE decision is often whether the data are attribute or variable.

  • Discrete / attribute: countable or categorical results. Examples: pass/fail, yes/no, number of defects, defect type.
  • Continuous / variable: measured on a continuum. Examples: length, weight, time, temperature, pressure.

A common exam trap is forgetting that the same process can be described with either type. “Scratch present?” is attribute. “Scratch length in mm” is variable.
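The scratch example can be made concrete. The sketch below uses made-up inspection data to show the same units described both ways; the variable names and values are illustrative only:

```python
# Hypothetical scratch-inspection data: the same six units described two ways.
scratch_lengths_mm = [0.0, 1.2, 0.0, 0.4, 2.5, 0.0]  # variable (continuous) data

# Attribute view: "scratch present?" derived from the variable measurement
scratch_present = [length > 0 for length in scratch_lengths_mm]

defectives = sum(scratch_present)  # count of defective units -> attribute logic
mean_length = sum(scratch_lengths_mm) / len(scratch_lengths_mm)  # variable logic

print(defectives)               # 3
print(round(mean_length, 3))    # 0.683
```

Note how the variable measurement carries more information: the attribute view can always be derived from it, but not the reverse.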

Measurement scales

  • Nominal: categories without order
  • Ordinal: ordered categories without guaranteed equal spacing
  • Interval: equal intervals but no true zero
  • Ratio: equal intervals with a meaningful zero

The practical CQE point is that the scale limits what summaries and operations are reasonable. Ordinal data should not be treated as if it has full ratio meaning.

Data collection and integrity

Data quality problems often destroy downstream analysis before any formula is applied. Common integrity threats include unclear operational definitions, transcription error, missing data, inconsistent sampling, rounding, poor time stamping, and mismatched revision or product-code logic.

Strong controls include:

  • clear operational definitions
  • validation rules and required fields
  • audit trails and time stamps
  • alignment with calibration and MSA controls
  • documented sampling and selection rules
  • trained and standardized data collectors

A larger sample size does not fix biased data. That is a classic CQE trap.

Descriptive Statistics and Graphical Methods

Central tendency, dispersion, and shape

You should know what the common descriptive statistics mean and when each one is the most informative.

  • Mean: arithmetic average, sensitive to outliers
  • Median: middle value, stronger for skewed data
  • Mode: most common value or category
  • Range: maximum minus minimum
  • Variance and standard deviation: measures of spread
  • IQR: spread of the middle 50 percent of the data
  • Skewness and kurtosis: describe shape characteristics

When data are strongly skewed or include meaningful outliers, median and IQR are often more representative than mean and standard deviation.
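A quick sketch with Python's standard `statistics` module illustrates the point; the cycle-time values are hypothetical and chosen to include one strong outlier:

```python
import statistics

# Hypothetical cycle-time data (minutes) with one strong right-tail outlier
times = [4.1, 4.3, 4.4, 4.6, 4.8, 5.0, 5.1, 5.3, 21.7]

mean = statistics.mean(times)
median = statistics.median(times)
q1, q2, q3 = statistics.quantiles(times, n=4)  # quartiles (default exclusive method)
iqr = q3 - q1

print(round(mean, 2))    # 6.59, pulled upward by the 21.7 outlier
print(round(median, 2))  # 4.8, still representative of the bulk of the data
print(round(iqr, 2))     # 0.85
```

The mean lands above every value except the outlier itself, while the median and IQR describe where the process actually runs.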

Graphical methods

Graphical methods support interpretation before advanced modeling.

  • Histograms: show distribution shape and spread
  • Box plots: show median, quartiles, and possible outliers
  • Stem-and-leaf plots: compact display of distribution while keeping values visible
  • Scatter plots: show possible relationships between two variables
  • Probability plots: assess whether data reasonably follow a distribution such as normal

Another recurring CQE trap: correlation shown in a scatter plot does not prove causation. The process explanation still needs evidence, theory, or designed experimentation.

Probability and Quantitative Concepts

Probability questions in CQE are usually about choosing the right rule and interpreting events correctly rather than doing exotic calculations.

  • Addition rule: use when combining event probabilities, accounting for overlap.
  • Multiplication rule: use for joint events, especially when independence holds.
  • Conditional probability: the probability of A given that B has occurred.
  • Independence: the occurrence of one event does not change the probability of the other.
  • Mutual exclusivity: the events cannot happen together.
  • Permutations vs combinations: permutations care about order; combinations do not.

The high-value trap is confusing independence with mutual exclusivity. If two events each have nonzero probability and are mutually exclusive, they cannot be independent: knowing one occurred drives the probability of the other to zero.
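The rules can be checked numerically. The sketch below uses a made-up lot example (the event definitions and probabilities are assumptions for illustration):

```python
# Hypothetical example: drawing one unit from a lot of 100.
# A = "unit is from supplier X" (30 units), B = "unit is oversized" (10 units),
# and 3 units are both from supplier X and oversized.
p_a, p_b, p_ab = 0.30, 0.10, 0.03

# Addition rule with overlap: P(A or B) = P(A) + P(B) - P(A and B)
p_a_or_b = p_a + p_b - p_ab

# Conditional probability: P(A | B) = P(A and B) / P(B)
p_a_given_b = p_ab / p_b

# Independence check: A and B are independent iff P(A and B) == P(A) * P(B)
independent = abs(p_ab - p_a * p_b) < 1e-12

print(round(p_a_or_b, 2))    # 0.37
print(round(p_a_given_b, 2)) # 0.3 -> equals P(A), consistent with independence
print(independent)           # True; mutually exclusive events (P(A and B) = 0) would fail this
```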

Probability Distributions

The exam often tests whether you can choose the right distribution for the problem description rather than derive everything from scratch.

  • Binomial: a fixed number of yes/no trials. Cue: counting defectives in n inspected units.
  • Poisson: rare-event counts over an area of opportunity. Cue: defects per unit, area, or time.
  • Hypergeometric: sampling without replacement from a finite lot. Cue: lot-based selection where earlier draws change later probabilities.
  • Normal: symmetric continuous data and many process measures. Cue: capability, z scores, many inference methods.
  • Exponential: time to failure with a constant failure rate. Cue: reliability and waiting-time scenarios.
  • Weibull: flexible life modeling for early-life, random, or wear-out failure. Cue: reliability behavior that changes with the shape parameter beta.

Another common CQE trap is mixing up defectives and defects. Defectives usually align to binomial logic and p or np charts. Defects per unit often align better to Poisson logic and c or u charts.
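The two pmf formulas can be written directly from the standard definitions; the defective rate and defect rate below are hypothetical numbers for illustration:

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(k defectives among n inspected units), each defective with probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(k defects) when defects occur at an average rate lam per unit of opportunity."""
    return exp(-lam) * lam**k / factorial(k)

# Defectives: 50 units inspected, 2% defective rate -> binomial logic (p/np charts)
print(round(binomial_pmf(0, 50, 0.02), 4))  # 0.3642, chance of zero defectives

# Defects: average 1.2 defects per unit -> Poisson logic (c/u charts)
print(round(poisson_pmf(0, 1.2), 4))        # 0.3012, chance of a unit with zero defects
```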

Statistical Decision Making and Inference

Point and interval estimates

A point estimate gives one best estimate of a population parameter. An interval estimate gives a range of plausible values at a stated confidence level.

A confidence interval does not guarantee the true parameter is inside the interval for a specific sample. It describes the long-run behavior of the method across repeated sampling.
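The long-run interpretation can be demonstrated by simulation. This sketch assumes a known sigma so the z-based interval applies; all parameter values are made up:

```python
import random
import statistics

# Simulation sketch: 95% confidence intervals for a mean with known sigma.
# The claim is about the method's long-run coverage, not any single interval.
random.seed(1)
mu, sigma, n, z = 50.0, 2.0, 25, 1.96
covered = 0
trials = 2000

for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    half_width = z * sigma / n ** 0.5
    if xbar - half_width <= mu <= xbar + half_width:
        covered += 1

print(covered / trials)  # close to 0.95 across many repeated samples
```

Any single interval either contains mu or it does not; the 95% describes how often the procedure succeeds over repeated sampling.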

Hypothesis testing

Hypothesis testing is a formal way to evaluate evidence against a default assumption.

  • Null hypothesis: default or no-difference assumption
  • Alternative hypothesis: competing claim
  • Type I error: reject a true null
  • Type II error: fail to reject a false null
  • p-value: probability of observing data at least this extreme, assuming the null hypothesis is true
  • Power: probability of detecting a real effect

Lowering alpha makes false alarms less likely but usually makes misses more likely unless sample size also increases.
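That tradeoff can be seen in a standard normal-approximation power formula for a one-sided z test (a sketch, not an exam-required derivation; the shift size and sample sizes are hypothetical):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_one_sided_z(alpha_z, delta, sigma, n):
    """Power of a one-sided z test to detect a true mean shift of delta."""
    return 1 - norm_cdf(alpha_z - delta * sqrt(n) / sigma)

# Detecting a 0.5-sigma shift
p_05 = power_one_sided_z(1.645, 0.5, 1.0, 16)        # alpha = 0.05, n = 16
p_01 = power_one_sided_z(2.326, 0.5, 1.0, 16)        # alpha = 0.01: power drops
p_01_big_n = power_one_sided_z(2.326, 0.5, 1.0, 64)  # larger n restores power

print(round(p_05, 3), round(p_01, 3), round(p_01_big_n, 3))
```

Tightening alpha from 0.05 to 0.01 at fixed n cuts the power sharply; only the larger sample recovers it.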

Common tests

  • z test and t test for means
  • paired t test for before/after or matched data
  • chi-square for count-based comparisons or goodness of fit
  • F tests and ANOVA for comparing variation structures or multiple means

A common CQE trap is ignoring structure. If the same units are measured before and after, pairing matters. If multiple groups are compared, ANOVA is often more appropriate than a series of separate t tests.
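The paired logic is easy to show in a few lines: work on the within-unit differences, not the two columns separately. The before/after values below are hypothetical:

```python
import statistics

# Hypothetical before/after cycle times for the SAME five machines (paired data)
before = [12.1, 11.8, 12.5, 12.0, 12.3]
after  = [11.6, 11.5, 12.1, 11.7, 11.9]

# Paired analysis works on the within-unit differences
diffs = [b - a for b, a in zip(before, after)]
d_bar = statistics.mean(diffs)
s_d = statistics.stdev(diffs)
n = len(diffs)

t_stat = d_bar / (s_d / n ** 0.5)  # compare to a t critical value with n - 1 = 4 df
print(round(d_bar, 3), round(t_stat, 3))
```

Pairing removes machine-to-machine variation from the comparison, which is why it often detects a shift that a two-sample test on the same numbers would miss.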

Regression, Correlation, and Time Series

Regression and correlation support understanding relationships between variables, but they do not substitute for process knowledge or designed studies.

  • Correlation coefficient: direction and strength of linear association
  • R-squared: proportion of variation explained by the fitted model
  • Slope: expected change in Y for a one-unit change in X
  • Residuals: remaining unexplained error after the model

High R-squared does not prove causation. Extrapolating outside the observed range is also a classic error. Residual behavior matters; patterns in residuals often reveal that the model is incomplete or inappropriate.
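These quantities can all be computed from the standard least-squares formulas. The x and y values below are hypothetical process data for illustration:

```python
# Least-squares sketch for one X and one Y, using hypothetical process data
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.3, 5.9, 8.2, 9.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxx = sum((xi - x_bar) ** 2 for xi in x)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

slope = sxy / sxx                      # expected change in Y per unit change in X
intercept = y_bar - slope * x_bar

# R-squared: proportion of variation in y explained by the fitted line
ss_tot = sum((yi - y_bar) ** 2 for yi in y)
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
r_squared = 1 - ss_res / ss_tot

residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
print(round(slope, 3), round(intercept, 3), round(r_squared, 4))
```

Even with R-squared near 1, the residuals deserve a look: a pattern in them (curvature, drift over time) signals an incomplete model regardless of the fit statistic.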

For time-series thinking, watch for trend, seasonality, and autocorrelation. Data taken over time are not always independent, and that matters for interpretation.

Statistical Process Control and Capability

SPC logic

SPC distinguishes common cause variation from special cause variation. This is Deming and Shewhart territory, and it remains central to the CQE exam.

Key reminders:

  • Control limits come from the process.
  • Specification limits come from customer or engineering requirements.
  • A process can be stable but not capable.
  • A process can appear capable on paper while still being unstable.
  • Tampering with common-cause variation usually makes performance worse.

Control chart selection

  • X-bar and R: small rational subgroups of variable data
  • X-bar and s: larger subgroups of variable data
  • Individuals and moving range: one observation per period
  • p and np: defective units (np requires a constant subgroup size)
  • c and u: defect counts (u adjusts for varying inspection opportunity)

Rational subgrouping is another high-value concept. Good subgrouping preserves meaningful within-subgroup variation and exposes between-subgroup change.
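As one sketch of the control-limit arithmetic, the individuals and moving range case from the list above can be computed with the standard constants d2 = 1.128 and D4 = 3.267 for a moving range of size 2 (the data are hypothetical):

```python
import statistics

# Individuals and moving range sketch: one observation per period
data = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = statistics.mean(moving_ranges)
x_bar = statistics.mean(data)

# Standard I-MR constants for moving ranges of size 2: d2 = 1.128, D4 = 3.267
ucl_x = x_bar + 3 * mr_bar / 1.128
lcl_x = x_bar - 3 * mr_bar / 1.128
ucl_mr = 3.267 * mr_bar  # the MR chart's lower limit is 0 for n = 2

print(round(x_bar, 3), round(ucl_x, 3), round(lcl_x, 3), round(ucl_mr, 3))
```

Note that the limits come entirely from the process data, which is the key contrast with specification limits.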

Capability and performance indices

Capability answers whether a stable process can meet specifications.

  • Cp: potential capability based on spread relative to spec width
  • Cpk: actual short-term capability accounting for centering
  • Pp and Ppk: long-term performance versions using overall variation

The most important CQE decision rule here is simple: confirm statistical control before trusting capability indices. If the process is unstable, capability numbers can mislead.
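The index arithmetic itself is short. The sketch below uses hypothetical data and the overall sample standard deviation for simplicity; in practice the short-term sigma for Cp/Cpk is often estimated from R-bar/d2 on a control chart, and the indices only mean anything if stability has already been confirmed:

```python
import statistics

# Capability sketch: ASSUMES the process has already been shown to be stable
data = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0, 10.2, 10.1]
lsl, usl = 9.0, 11.0  # hypothetical specification limits

mu = statistics.mean(data)
sigma = statistics.stdev(data)  # illustration; short-term sigma often uses R-bar/d2

cp = (usl - lsl) / (6 * sigma)               # potential capability, ignores centering
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # penalizes an off-center process

print(round(cp, 2), round(cpk, 2))
```

Cpk is always less than or equal to Cp; the gap between them measures how much capability is lost to poor centering alone.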

Design of Experiments

DOE is about learning efficiently from structured experimentation. The CQE exam typically focuses on terminology, planning logic, and interpretation rather than very advanced modeling.

  • Factor: input being changed
  • Level: specific setting of a factor
  • Response: output being measured
  • Replication: repeated runs to estimate error
  • Randomization: protection against time-related nuisance effects
  • Blocking: deliberate management of known nuisance variables
  • Confounding: inability to separate effects cleanly

Be ready to distinguish:

  • completely randomized designs
  • randomized block designs
  • full and fractional factorial designs

A common exam trap is ignoring interaction effects. If interaction is strong, main-effect interpretations by themselves can be misleading.
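The effect arithmetic for the smallest factorial case can be sketched directly. The responses below are made-up numbers chosen so the interaction is visible:

```python
# 2x2 factorial sketch with hypothetical responses (coded levels -1 / +1)
# Runs: (A level, B level, response)
runs = [(-1, -1, 20.0), (+1, -1, 30.0), (-1, +1, 25.0), (+1, +1, 50.0)]

n = len(runs)
# Each effect is the average response at the high level minus at the low level
effect_a = sum(a * y for a, _, y in runs) / (n / 2)
effect_b = sum(b * y for _, b, y in runs) / (n / 2)
effect_ab = sum(a * b * y for a, b, y in runs) / (n / 2)  # interaction

print(effect_a, effect_b, effect_ab)  # 17.5 12.5 7.5
```

Here A raises the response by 5 when B is low but by 25 when B is high; the nonzero AB effect captures exactly that, which is why quoting the main effect of A alone would mislead.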

High-Value Exam Traps and Decision Cues

  • Start with data type: variable versus attribute controls much of the rest of the decision tree.
  • Do not average attribute data and automatically treat it as fully continuous.
  • Larger sample size does not fix bias.
  • Correlation does not prove causation.
  • Independence and mutual exclusivity are different concepts.
  • Defectives and defects are not the same thing, and they map to different charts and distributions.
  • Capability analysis requires a stable process first.
  • Specification limits are not control limits.
  • If the data are paired, use a paired comparison logic.
  • If interaction is present in DOE, main effects alone can mislead.

Study Recommendations for Section 6

  1. Build a personal decision tree for chart selection, test selection, and distribution selection.
  2. Practice distinguishing defectives from defects until it is automatic.
  3. Work control-chart and capability questions together so the stability prerequisite becomes instinctive.
  4. Review examples of skewed data and decide when mean versus median is the better summary.
  5. Practice interpreting p-values, confidence intervals, and error types in process language, not just statistical language.
  6. Use real plant or service data where possible so the formulas stay anchored to practical judgment.

Section 6 becomes manageable when you study it as a decision system instead of a formula list. The exam is not looking for a calculator alone. It is looking for a quality engineer who knows when the math is being used correctly.