# Trainable Quantum Kernel

## Overview
Extends quantum kernels with learnable parameters that can be optimized for specific tasks. Uses kernel-target alignment (KTA) as the training objective.
| Property | Value |
|---|---|
| Category | Machine Learning |
| Difficulty | Advanced |
| Framework | PennyLane |
| Qubits | 4 |
| Depth | ~12 |
| Gates | RX, RY, RZ, CNOT |
## The Method

### Standard Quantum Kernel

```
K(x, y) = |⟨φ(x)|φ(y)⟩|²
```
### Trainable Quantum Kernel

```
K_θ(x, y) = |⟨φ_θ(x)|φ_θ(y)⟩|²
```

where θ are learnable parameters in the feature map.
### Kernel-Target Alignment

Measures how well the kernel matches the classification task:

```
KTA(K, y) = ⟨K, yy^T⟩_F / (||K||_F · ||yy^T||_F)
```
- KTA = 1: Perfect alignment
- KTA = 0: No correlation
- KTA = -1: Anti-alignment
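The KTA formula above can be sketched in a few lines of NumPy (the function name `kta` is illustrative, not from the package):

```python
import numpy as np

def kta(K, y):
    """Kernel-target alignment between kernel matrix K and labels y in {-1, +1}."""
    Y = np.outer(y, y)  # ideal target kernel yy^T
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)  # a kernel identical to the target
print(kta(K_ideal, y))   # → 1.0 (perfect alignment)
print(kta(-K_ideal, y))  # → -1.0 (anti-alignment)
```

Note that ⟨K, yy^T⟩_F is the Frobenius inner product, i.e. the elementwise product summed, which is what `np.sum(K * Y)` computes.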
## Circuit Structure

```
Layer 1 (Data + Trainable):
     ┌─────────┐┌────────┐┌────────┐
q_0: ┤ RY(x₀π) ├┤ RZ(θ₀) ├┤ RY(θ₁) ├──■──
     ├─────────┤├────────┤├────────┤┌─┴─┐
q_1: ┤ RY(x₁π) ├┤ RZ(θ₂) ├┤ RY(θ₃) ├┤ X ├
     └─────────┘└────────┘└────────┘└───┘

Repeat for n_reps layers...
```
## Running the Circuit

```python
from circuit import run_circuit

result = run_circuit(n_samples=15, n_iterations=20)
print(f"Random KTA: {result['random_kta']:.4f}")
print(f"Trained KTA: {result['trained_kta']:.4f}")
print(f"Improvement: {result['improvement']:.4f}")
```
## Expected Output
| Metric | Before | After |
|---|---|---|
| KTA | ~0.1-0.3 | ~0.5-0.8 |
| Classification accuracy | ~60% | ~80%+ |
## Training Process

1. Initialize random parameters θ₀
2. Compute the kernel matrix K_θ
3. Evaluate KTA(K_θ, y)
4. Update θ via gradient descent on −KTA (i.e., maximize alignment)
5. Repeat until convergence
## Comparison
| Kernel Type | Parameters | Adaptability |
|---|---|---|
| Fixed | 0 | None |
| Trainable | O(n_qubits × reps) | Task-specific |
| Neural | O(many) | Very high |
## Applications
- Task-specific kernels: Optimize for particular datasets
- Transfer learning: Pre-train on related tasks
- Ensemble methods: Multiple trained kernels