Two ideas come to mind.
One is to think back to grade school and just use two integers per number, a numerator and a denominator. Then you never actually divide: adding means cross-multiplying, and multiplying means multiplying numerators and denominators.
You’ll probably want to simplify the fractions before displaying them, but algorithms for that are easily found: divide both parts by their GCD (greatest common divisor), which Euclid’s algorithm computes.
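Here’s a minimal sketch of the first idea in Python, representing each number as a `(numerator, denominator)` tuple and using the standard library’s `math.gcd` for simplification (the function names `simplify`, `add`, and `mul` are just illustrative choices):

```python
from math import gcd

def simplify(num, den):
    # Reduce the fraction by the greatest common divisor (Euclid's algorithm,
    # via the standard library's math.gcd).
    g = gcd(num, den)
    return num // g, den // g

def add(a, b):
    # (an/ad) + (bn/bd) = (an*bd + bn*ad) / (ad*bd) -- only multiplication, no division.
    an, ad = a
    bn, bd = b
    return simplify(an * bd + bn * ad, ad * bd)

def mul(a, b):
    # (an/ad) * (bn/bd) = (an*bn) / (ad*bd)
    an, ad = a
    bn, bd = b
    return simplify(an * bn, ad * bd)

print(add((1, 3), (1, 6)))  # (1, 2)
print(mul((2, 3), (3, 4)))  # (1, 2)
```

In practice Python already ships this as `fractions.Fraction`, but the sketch shows how little machinery the pencil-and-paper approach needs.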
The other idea is to find an algorithm that locates the closest matching fraction to a decimal number. I’ve used one before.
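One well-known way to do this (not necessarily the one the answer had in mind) is a Stern–Brocot tree search: repeatedly take the mediant of a lower and upper bound, narrowing in on the target until a denominator limit is hit. A sketch, assuming a nonnegative input; the name `closest_fraction` and the `max_den` parameter are illustrative:

```python
def closest_fraction(x, max_den=1000):
    # Stern-Brocot search for the fraction nearest x with denominator <= max_den.
    # Assumes x >= 0. Python's fractions.Fraction.limit_denominator offers the
    # same service out of the box.
    ln, ld = 0, 1          # lower bound 0/1
    un, ud = 1, 0          # upper bound "infinity" 1/0
    best = (round(x), 1)   # start from the nearest whole number
    while True:
        mn, md = ln + un, ld + ud   # mediant of the two bounds
        if md > max_den:
            return best
        if abs(x - mn / md) < abs(x - best[0] / best[1]):
            best = (mn, md)
        if mn / md < x:
            ln, ld = mn, md         # mediant too small: raise the lower bound
        else:
            un, ud = mn, md         # mediant too big (or exact): lower the upper bound

print(closest_fraction(3.14159265358979, 1000))  # (355, 113)
```

Each step strictly increases the mediant’s denominator, so the loop always terminates.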
Google is your friend here. Both ideas translate readily to most programming languages.