Yet promise alone does not guarantee appropriate use. First, many ML models are trained on datasets that do not reflect diverse student populations; applying them uncritically risks perpetuating existing inequities. Second, ML-driven recommendations can nudge curricula and assessment toward what is easily measurable rather than what is educationally meaningful. Third, the opacity of commercial systems limits educators’ ability to contest or contextualize automated decisions. Finally, the vendor-driven rush toward “hot” solutions, fueled by platform visibility and procurement incentives, can lead to superficial adoption without sufficient teacher training, independent evaluation, or parental engagement.