This summary is based on an abstract from a small study of 128 people in Kazakhstan. The evidence is preliminary: it comes from a single abstract, and the study did not measure health outcomes. The study tested whether visual food atlases and artificial intelligence could estimate food portion sizes more accurately than unassisted guessing.
The results showed that visual aids and AI models reduced estimation errors compared with unaided human judgment. The tools performed best for average-sized portions but struggled with smaller meat-based items; accuracy varied with the texture and size of the food.
Researchers noted that the technology needs further refinement for complex dishes and small portion types. Because this was a methodological comparison, it did not assess actual health outcomes or patient safety. The researchers reported no adverse events during the testing period.
Readers should view these findings as a step toward better dietary monitoring tools rather than a finished solution. Future work is needed to confirm whether these methods improve patients' long-term health across diverse settings.