With binary operators, the basic axioms are
No external side-effects: the result depends only on the operands; a x b = a x b on every evaluation
Closure: the operation preserves the type; t -> t -> t
Associativity: the nesting doesn't matter; a x (b x c) = (a x b) x c
Commutativity: the order of parameters doesn't matter; a x b = b x a
A neutral element is a value that does nothing: when one of the parameters to the operator is a neutral element, the operator returns the other parameter.
Left neutral element: when applied to the left side of the operator; NL x a = a
Right neutral element: when applied to the right side of the operator; a x NR = a
Neutral element: left and right neutral elements are the same; NL = NR
An inverse maps any value to one that combines with it to produce the neutral element.
Left inverse: inverseL(a) x a = N
Right inverse: a x inverseR(a) = N
Inverse: inverseL = inverseR
Then there are two-operator axioms like
Left-distributivity: a x (b + c) = (a x b) + (a x c)
Right-distributivity: (b + c) x a = (b x a) + (c x a)
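The single-operator axioms above can be written down as executable predicates and spot-checked on random inputs. Here's a minimal sketch in Python; the function names and the sampling range are my own choices, not part of any standard API:

```python
import random

def check(axiom, op, samples=100):
    """True if the axiom holds for `op` on all sampled (a, b, c) triples."""
    return all(axiom(op, random.randint(-50, 50),
                         random.randint(-50, 50),
                         random.randint(-50, 50))
               for _ in range(samples))

associative = lambda op, a, b, c: op(a, op(b, c)) == op(op(a, b), c)
commutative = lambda op, a, b, c: op(a, b) == op(b, a)
left_neutral = lambda n: (lambda op, a, b, c: op(n, a) == a)
right_neutral = lambda n: (lambda op, a, b, c: op(a, n) == a)

add = lambda a, b: a + b
# addition passes all four, with 0 as its neutral element
```

Passing the checks is evidence, not proof, but it's cheap evidence.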
To use the axioms in transforming a tree of operations (say, the AST used by your calculator program), collect the axioms that match the nodes in the tree. The new trees generated by applying those axioms are the neighbors of the original tree in a search graph. Toss in a cost function and a search algorithm, and you emerge with a program optimizer.
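Here's a sketch of that neighbor generation. The tuple encoding of trees and the rule names are illustrative assumptions, not a fixed design:

```python
# Trees are nested tuples like ('x', a, b); each rewrite rule returns the
# transformed tree, or None when its axiom doesn't match at the root.

def assoc_right(t):
    # (a x b) x c  ->  a x (b x c)
    if isinstance(t, tuple) and isinstance(t[1], tuple) and t[0] == t[1][0]:
        op, (_, a, b), c = t
        return (op, a, (op, b, c))
    return None

def commute(t):
    # a x b  ->  b x a
    if isinstance(t, tuple) and len(t) == 3:
        op, a, b = t
        return (op, b, a)
    return None

def neighbors(tree, rules):
    """Apply each rule at the root; a fuller version would recurse."""
    return [r(tree) for r in rules if r(tree) is not None]

t = ('x', ('x', 1, 2), 3)   # (1 x 2) x 3
```

A real optimizer would apply the rules at every node, not just the root, and deduplicate the resulting trees.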
The nice thing about the axioms is that they're kind of stackable. For example, if you know that your operator x forms an abelian group (that is, it has closure, associativity, an inverse, a neutral element, and commutativity), then an operator o defined as a o b = a x b x C, where C is a constant, is also an abelian group (ditto for permutations of the order of operations, thanks to commutativity).
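To see this concretely, take integer addition as x and an arbitrary constant C = 5. The neutral element and inverse of the derived operator o fall out of the definition (the formulas below are derived for this specific example):

```python
C = 5
o = lambda a, b: a + b + C        # a o b = a x b x C, with x = +

neutral = -C                      # a o (-C) = a + (-C) + C = a
inverse = lambda a: -a - 2 * C    # a o inverse(a) = -C, the new neutral

# spot-checks for a few values
for a in (-3, 0, 7):
    assert o(a, neutral) == a
    assert o(a, inverse(a)) == neutral
for a, b, c in [(1, 2, 3), (-4, 5, 6)]:
    assert o(a, b) == o(b, a)                 # commutativity
    assert o(a, o(b, c)) == o(o(a, b), c)     # associativity
```

Note that o inherits the group structure but gets a different neutral element (-C instead of 0).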
If you create a list of axiom-preserving tree transformations, I think you could build a search graph to figure out which axioms an operator fulfills: take the operator's AST and apply transformations until you reach an already-proven operator. Then automatically add the axioms to the operator's documentation and generate tests for them.
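A toy version of that search might look like this; the rewrite rules, the tuple encoding, and the table of proven operators are all illustrative assumptions:

```python
from collections import deque

def rewrites(t):
    """Yield trees reachable in one rewrite step (at the root only)."""
    if isinstance(t, tuple) and len(t) == 3:
        op, a, b = t
        yield (op, b, a)                          # commutativity
        if isinstance(a, tuple) and a[0] == op:   # re-associate to the right
            yield (op, a[1], (op, a[2], b))

# hypothetical table of operators whose axioms are already established
PROVEN = {('+', 'a', 'b'): ['closure', 'associative', 'commutative']}

def axioms_of(tree, limit=100):
    """BFS over rewrites until we hit a proven form, or give up."""
    seen, queue = {tree}, deque([tree])
    while queue and len(seen) < limit:
        t = queue.popleft()
        if t in PROVEN:
            return PROVEN[t]
        for n in rewrites(t):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return None
```

For example, `axioms_of(('+', 'b', 'a'))` reaches the proven form after a single commute.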
Maybe you could generate time-complexity figures for a function too. Determine the maximum number of times it calls other functions with respect to its parameters, and multiply the other functions' complexities by that. For space complexity, multiply the space complexity of the called functions by the number of times they're called, and add the current function's own allocations.
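As a toy model of that composition rule (the names and closed-form functions here are illustrative, not a real analysis):

```python
def compose_time(calls_per_input, callee_time):
    """Upper-bound the caller's time as calls(n) * callee_time(n)."""
    return lambda n: calls_per_input(n) * callee_time(n)

# Suppose f loops over its n inputs and calls g once per input,
# and g itself is linear in n:
g_time = lambda n: n                      # g is O(n)
f_calls = lambda n: n                     # g is invoked n times
f_time = compose_time(f_calls, g_time)    # f is O(n * n) = O(n^2)
```

The same multiply-and-add shape works for the space-complexity rule, with the caller's own allocations added on top.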
For an empirical version of that, the IDE could call each pure function with inputs of different sizes and measure the CPU time and memory used. Then plot a nice graph, do some curve-fitting, and embed the result in the documentation. If you do this profiling live in a background process, you could have performance metrics that update while you code. A real-time view of the efficiency of your code...
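A minimal sketch of the measurement half, with a log-log slope estimate standing in for the curve-fitting step (the function names and sample sizes are my own choices):

```python
import math
import time

def measure(fn, sizes):
    """Wall-clock time of fn(n) for each input size n."""
    times = []
    for n in sizes:
        start = time.perf_counter()
        fn(n)
        times.append(time.perf_counter() - start)
    return times

def growth_exponent(sizes, times):
    """Least-squares slope of log(time) vs log(size) ~ polynomial degree."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# a deliberately quadratic workload: the estimated exponent should be near 2
quadratic = lambda n: sum(i * j for i in range(n) for j in range(n))
sizes = [200, 400, 800, 1600]
```

Measurement noise makes the estimate fuzzy at small sizes, which is one reason doing this continuously in the background is attractive: more samples sharpen the fit over time.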