Can we obtain an explicit and efficient analytic interpolation of tetration by this method?


























I am curious about this. It has been a very long time since I last toyed with this topic - it was an old interest of mine, maybe 8 years (250 megaseconds) ago at least - and I never really came to a conclusion, nor do I think anyone did, entirely satisfactorily. Some comments on a YouTube video I was watching inspired me to dust it off and give it another thwack.



And the question is, basically: how can one construct a "reasonable" interpolation of the "tetration" operation, also called a "power tower", which for those who have not heard of it is defined for natural $n \ge 1$ and real $a > 1$ by



$$^{n} a = a \uparrow\uparrow n := \underbrace{a^{a^{a^{\cdots^a}}}}_{\mbox{$n$ copies of $a$}}$$



where the nested exponentiations on the left are evaluated in right-associative fashion, so the deepest ("highest") layer is done first, e.g.



$$^{3} 3 = 3^{3^3} = 3^{27} = 7\,625\,597\,484\,987$$



and not left-associative, i.e.



$$^{3} 3 \ne (3^3)^3 = 3^{3 \cdot 3} = 3^9 = 19\,683$$



What the "interpolation" part means is basically, given that this definition clearly only works for values of the second argument, $n$ (called as such by analogy with exponentiation even though it is written first in the "left-superscript" notation just introduced), usually called the "height" of the tetration or power tower for obvious reasons, that are natural numbers at least 1, since we have to have a whole number of "copies of $a$" - half a copy, say, wouldn't make much sense, as while you can literally write a half-written "$a$", that is formally nonsense and has no mathematical meaning, although it may have other forms of meaning from other angles of human understanding and analysis, e.g. perhaps as a form of artsey commentary. Mathematical meaning is, though, of course, what we're interested in. We can, of course, naturally extend this in a way similar to extending exponentiation to the integers by noting that



$$^{n+1} a = a^{^n a}$$



and thus



$$^{n-1} a = \log_a(^n a)$$



and if we do this we can at least extend to $^0 a = 1$, similar to exponentiation, and $^{(-1)} a = 0$ - a rather interesting result when contrasted with exponentiation, where the first negative power of a number is not a constant but its reciprocal. We cannot, however, extend to $^{(-2)} a$, as that gives $\log_a(0)$, which is undefined (though if you want to stretch the maths a bit and expand the codomain to the extended reals, you can say $^{(-2)} a = -\infty$; in any case, $^{(-3)} a$ and beyond are definitely, really undefined, since no real exponential can be negative, much less negative infinity!). So this peters out.



Of course, the most interesting bit - as hinted at with the "half a copy" business above - is trying to extend the height $n$ to real values, presumably in $(-2, \infty)$ at least.



A number of methods have been fielded over those past epochs which attempt to do this, along with some interesting discussion of the conditions required to produce a suitably "natural" extension. After all, it is trivially obvious that one can "interpolate" a given sparse sequence of points in any way one desires, and even the identity $^{n+1} a = a^{^n a}$ only pins the interpolant down up to whole-number increments of the tower: fill any unit interval with anything you like, and the identity will extend it to an interpolant that satisfies it. For exponentiation, this non-uniqueness is much less of a problem because we also have the additional identity $a^{n+m} = a^n a^m$, which lets us extend to rational values; no such identity exists for tetration.



In this regard, the extension of tetration is similar to the extension of the factorial, which is similarly impoverished of identities - interestingly, new ones only came about after Euler produced the extension in the form of the gamma function, to meet a challenge originally proposed by Bernoulli to do exactly this. The gamma function, however, is still ostensibly more "natural", simply because a) it often crops up and b) it has some very cute integral representations, esp. the darlin'



$$\Gamma(x) = \int_{0}^{1} [-\log(u)]^{x-1}\, du$$



(Though with regard to objection a), one could say this may simply be because we have not yet found such an expression for tetration, and thus places where it might be useful are currently written off as "unsolvable".)



Yet clearly, nothing like that seems to have turned up for tetration, either. Moreover, in all these past discussions, many of the extension methods proposed are in fact extremely cumbersome and computationally intensive to approximate, involving elaborate constructs like Riemann mappings, infinite limits of integral equations, and so forth - all things that, while mathematically valid, are both inelegant and not something you're going to be able to program into a software package like Mathematica and have it spit out 2500 digits of $^{1/2} 2$ in the blink of an eye.



But nonetheless, one particular method among those proposed seems to be both fairly simple and possibly amenable to more detailed analysis, and that is the "Carleman matrix" operator method.



This method is most succinctly expressed for the specific case $a = e$, constructing the "natural tetrational" $\mathrm{tet}(x) := {}^{x} e$ with real height $x$, so we'll focus on that for now. It is based on the following two observations. The first is that the height-$n$ power tower of $e$ is the $n$-fold iterated exponential evaluated at $1$, namely



$$^n e = \exp^n(1)$$



or perhaps more nicely for what we're about to do,



$$^{n-1} e = \exp^n(0)$$



which has some interesting gamma-function-like quality about it, with the offset.
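(As a quick concrete check of this observation, here is a trivial Python sketch - the helper name `exp_iter` is my own, nothing standard:)

```python
from math import exp

def exp_iter(n, x=0.0):
    """n-fold iterated exponential exp(exp(...exp(x)...))."""
    for _ in range(n):
        x = exp(x)
    return x

# exp^n(0) = ^(n-1) e:  1, e, e^e, ...
print([exp_iter(n) for n in (1, 2, 3)])  # [1.0, 2.718..., 15.154...]
```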



And the second one is the following. If we let $\exp$ be given by its power series,



$$\exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$



then we can actually represent such iterated exponentials using what is called its Carleman matrix, basically the infinite-order "matrix" with entries



$$C[\exp]_{ij} = \frac{i^j}{j!}$$



such that if we have the infinite vector of exponential coefficients $\mathbf{a} = \left[ \frac{1}{1!}\; \frac{1}{2!}\; \frac{1}{3!}\; \cdots \right]^T$, then the vector $\mathbf{b}_n = (\mathbf{C}[\exp])^n \mathbf{a}$ gives the coefficients of $\exp^n$. In particular, if we sum the top row, we get exactly what we want: $^{n-1} e$ (but see ADD 2 below for a correction to this last point).
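(Here is a minimal numerical sketch of this setup in Python/NumPy - the truncation size $N$ is my own arbitrary choice, not part of the construction - checking the row-sum behaviour of integer powers of a truncated $\mathbf{C}[\exp]$, using the second-row form per the correction in ADD 2 below:)

```python
import numpy as np
from math import factorial

N = 32  # truncation size: arbitrary choice for this sketch

# Truncated Carleman matrix of exp: C[i, j] = i^j / j!
C = np.array([[float(i)**j / factorial(j) for j in range(N)]
              for i in range(N)])

# Up to truncation error, row i of C^n dotted with [1, 1, 1, ...] gives
# (exp^n(1))^i, so summing the second row (i = 1) of the n-th power
# recovers exp^n(1) = ^n e.
print(np.linalg.matrix_power(C, 1)[1].sum())  # ~ e
print(np.linalg.matrix_power(C, 2)[1].sum())  # ~ e^e = 15.154...
print(np.exp(np.exp(1.0)))                    # direct check of ^2 e
```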



Now the question is, though: how can we compute this matrix power for fractional $n$ in some explicit form? One possible way to do this - the way I saw when this method was suggested (by Gottfried Helms, who was here a long time ago; not sure if they still are) - is to try to diagonalize the matrix, so that you can use the fact that if $\mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}$ for some diagonal matrix $\mathbf{D}$ (which happens to be the matrix of eigenvalues), then $\mathbf{A}^t = \mathbf{P}\mathbf{D}^t\mathbf{P}^{-1}$, where the inner power is easy to compute: you just raise the diagonal entries to the power $t$.
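(A sketch of that recipe on a finite truncation, again in Python/NumPy, with size and conventions my own choices; per the comments below, such truncations appear to approximate the Kneser solution, so the printed value is only a truncation-dependent approximation:)

```python
import numpy as np
from math import factorial

N = 24  # truncation size: arbitrary choice for this sketch
C = np.array([[complex(i)**j / factorial(j) for j in range(N)]
              for i in range(N)])

# Diagonalize the truncation: C = P D P^{-1}; then C^t = P D^t P^{-1},
# raising the eigenvalues to the (possibly fractional) power t.
w, P = np.linalg.eig(C)

def carleman_power(t):
    return (P * w**t) @ np.linalg.inv(P)  # = P @ diag(w**t) @ P^{-1}

# Summing the second row of C^t (see ADD 2 below) gives the height-t tower.
val = carleman_power(0.5)[1].sum()
print(val)  # rough approximation to ^{1/2} e; imag part should be small
```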



Numerically this seems to work, at least for finite truncations of the matrix $\mathbf{C}[\exp]$, but analytically it may be on shaky ground, as I gather from this thread, which I came across when trying to dig into this once more:



Diagonalization of an infinite matrix



and moreover it's still not as efficient as we'd like - we're not computing 2500 digits of $^{1/2} e$ this way, and I want them computed, dammit!



However, it seems that in this case some of the objections raised in that thread may not apply. In particular, it is mentioned there that an infinite matrix diagonalization is ambiguous (which seems to mirror the great freedom in tetration interpolation generally) due to the choice of normed vector space over which to make sense of it; moreover, it was pointed out that the usual, most "natural" space, $\ell^2$, did not work for that questioner's particular matrix, because it would "map" most "vectors" of $\ell^2$ effectively outside the space. This Carleman matrix seems better behaved: any vector $[ a_0\; a_1\; a_2\; \cdots ]^T \in \ell^2$ by definition has bounded (indeed square-summable) entries, so owing to the factorials in the matrix, each entry of the product is given by an absolutely convergent series, as is illustrated by considering the bounding "vector" $[ 1\; 1\; 1\; \cdots ]^T$ as "formally" acted upon by $\mathbf{C}[\exp]$. So in that regard, we seem to be in better shape against at least the objections raised in that thread than that poster was.
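(Concretely, the "bounding vector" computation: each entry of the image is a rapidly convergent series summing to $e^i$ - finite entrywise, even though the entries grow with $i$. A quick sketch:)

```python
import numpy as np
from math import factorial

N = 40
C = np.array([[float(i)**j / factorial(j) for j in range(N)]
              for i in range(N)])

ones = np.ones(N)   # the bounding "vector" [1 1 1 ...]^T
image = C @ ones    # entry i is the (truncated) series sum_j i^j / j!
print(image[:5])             # ~ [1, e, e^2, e^3, e^4]
print(np.exp(np.arange(5)))  # comparison
```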



Thus the question I have is: if we take the relevant target space to be $\ell^2$, can we find a "natural" infinite diagonalization, and thus matrix power, for this case - and moreover express it somehow in terms of at least some sort of infinite combinatorial sums or other easily-manipulated expressions for its coefficients?



Moreover, another interesting and seemingly natural question this provokes is: exactly how sensitive is the method to the choice of norm used to interpret the matrix power? That is, do we have absolute freedom to interpolate $^n e$ in any way we please, and if not, just how much freedom do we get? I suspect "a lot", but there are some "lots" that are more than others in mathematics, even when infinities are concerned, thanks to Cantor. And can we do the inverse: if I fill in $^{x} e$ on the interval $[0, 1]$ with some freely-chosen interpolant (to make things easy, assume it is continuous and equals $1$ at $x = 0$ and $e$ at $x = 1$), can we find a norm such that the associated Carleman matrix power produces that interpolant? Does the interpolant have to be analytic? (That seems right, but keep in mind that we are summing a row, not necessarily creating a power series valid for all, or even any, inputs - though again it also "seems" right that if it is not analytic, the sum will diverge.) If so, what's the proof?



ADD (epoch time 1545.46 Ms): In the quest for at least a summatory-formula shot at the diagonalization, I note this matrix has the following interesting relations among its rows and columns:



$$C[\exp]_{(i+1)j} = \sum_{k=0}^{j} \frac{1}{(j - k)!} C[\exp]_{ik}$$



and



$$C[\exp]_{i(j+1)} = \frac{i}{j+1} C[\exp]_{ij}$$



Not sure if this helps anything, though. But at least it shows there is structure - we're not dealing with effectively random matrices - so in theory it might be exploited in some fashion to simplify things.
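(Both relations are easy to verify numerically; a throwaway check at one pair of indices:)

```python
import numpy as np
from math import factorial

N = 12
C = np.array([[float(i)**j / factorial(j) for j in range(N)]
              for i in range(N)])

i, j = 3, 7  # arbitrary test indices

# Row relation: C[i+1, j] = sum_{k=0}^{j} C[i, k] / (j-k)!
assert np.isclose(C[i + 1, j],
                  sum(C[i, k] / factorial(j - k) for k in range(j + 1)))

# Column relation: C[i, j+1] = (i / (j+1)) * C[i, j]
assert np.isclose(C[i, j + 1], i / (j + 1) * C[i, j])
print("both relations hold at (i, j) =", (i, j))
```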



ADD 2 (same time): You should actually sum the second row of the matrix power to get the tetration $^n e$, not the first, which does not give $^{n-1} e$ as claimed above - the first row always sums to $1$.



ADD 3 (ue+1545.47 Ms): The row relation above allows us to derive the interesting property that if $\mathbf{C}[\exp]$ is right-multiplied by the factorial matrix



$$\mathbf{D} := \begin{bmatrix} 1 & \frac{1}{1!} & \frac{1}{2!} & \frac{1}{3!} & \cdots \\
0 & 1 & \frac{1}{1!} & \frac{1}{2!} & \cdots \\
0 & 0 & 1 & \frac{1}{1!} & \cdots \\
0 & 0 & 0 & 1 & \cdots \\
& & \cdots & & \end{bmatrix}$$



where $D_{kj} = \frac{1}{(j-k)!}$ (with negative-argument factorials "naturally" extended as $\infty$, so that those entries are $0$), the product is $\mathbf{C}[\exp]$ shifted up by one row.



ADD 4 (sim. time): It looks like we can move both up and down: the matrix with entries $(-1)^{j-k} \frac{1}{(j-k)!}$ - that is, the inverse of the matrix above, which is perhaps better called $\mathbf{U}$ instead, freeing the name $\mathbf{D}$ for this down-shift - will shift down, while $\mathbf{U}$ shifts up. Shifting left and right is possible as well, but via the Hadamard product rather than the ordinary product, as the column relation above indicates. In particular, the Hadamard product with the matrix $\mathbf{L}$ with entries $L_{ij} = \frac{i}{j+1}$ will shift left, and with the matrix $\mathbf{R}$ with entries $R_{ij} = \frac{j}{i}$ will shift right. Thus we have some interesting system for "moving" the matrix about, like some kind of tableau or grid - though I'm not sure what these symmetry properties do for easing an analytic solution/explicit formula for the matrix power as a summation.
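(A numerical check of all four shifts; note the right-shift check skips the $i = 0$ row, where $R_{ij} = j/i$ is undefined:)

```python
import numpy as np
from math import factorial

N = 12
C = np.array([[float(i)**j / factorial(j) for j in range(N)]
              for i in range(N)])

# Up-shift: right-multiplication by U, with U[k, j] = 1/(j-k)! for j >= k.
U = np.array([[1.0 / factorial(j - k) if j >= k else 0.0
               for j in range(N)] for k in range(N)])
assert np.allclose((C @ U)[:-1], C[1:])

# Down-shift: right-multiplication by D = U^{-1}, whose entries are
# (-1)^(j-k)/(j-k)! on the same upper triangle (alternating signs).
D = np.array([[(-1.0)**(j - k) / factorial(j - k) if j >= k else 0.0
               for j in range(N)] for k in range(N)])
assert np.allclose(U @ D, np.eye(N))     # D really is U^{-1}
assert np.allclose((C @ D)[1:], C[:-1])

# Left/right shifts via the Hadamard (entrywise) product.
L = np.array([[i / (j + 1) for j in range(N)] for i in range(N)])
assert np.allclose((C * L)[:, :-1], C[:, 1:])

R = np.array([[j / i for j in range(N)] for i in range(1, N)])
assert np.allclose((C[1:] * R)[:, 1:], C[1:, :-1])
print("all four shift identities verified")
```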





























  • By my numerical tests of the method - using truncated Carleman matrices and trying to approach a sensible result by increasing the size of the matrix - it seems as if we implement an approximation to the Kneser method. The unfortunate thing is that the diagonalizations lead to sets of eigenvalues which vary considerably with the matrix size; for instance, a 64x64 matrix approximated the Kneser solution to an error of about 1e-6. The advantage(?) of the method is that it gives a real-to-real approximation for real fractional "heights". One other method (...)
    – Gottfried Helms, Dec 22 '18 at 10:52










  • (...) One other method tries to do the analysis with the ansatz of the Schröder mechanism, which introduces power series for the exponentiation shifted around the attracting or repelling fixed point. But this works nicely only for bases $b$ in the range $1 \lt b \lt e^{1/e}$; for general bases we get power series/Carleman matrices with complex coefficients which are difficult to evaluate for fractional heights - and, in the general case, complex values even for real fractional heights. I've compared 5 basic methods in a small essay: go.helms-net.de/math/tetdocs/ComparisionOfInterpolations.pdf
    – Gottfried Helms, Dec 22 '18 at 10:58






  • @Gottfried Helms: Yupp! What I'm trying to do is see if we can't break it down analytically to find a combinatorial formula (infinite summation) for the Kneser tetration, which might enable both its fast, efficient evaluation and its mathematical analysis for interesting properties and relations to other areas of maths. That's a bit discouraging, though, since it suggests that a direct attack by diagonalization may not work in the infinite case even if we can fangle some kind of formula.
    – The_Sympathizer, Dec 22 '18 at 10:59










  • The coefficients seem so nice and simple that it feels as if there's almost gotta be some kind of explicit form, but I don't know - it's no proof... (or disproof, either :((( )
    – The_Sympathizer, Dec 22 '18 at 10:59








  • Last comment for the moment, on the problem of analytically diagonalizing the infinite Carleman matrix: I've really invested much time in this - I found an analytical expression via the $\exp(x)-1$ case, and then saw that it was identical to the Schröder solution (without knowing it at all, and being pointed to it only later). There might be one more possible alternative path here: I've read an article of Eri Jabotinsky, who extended the Carleman matrix to a two-sided infinite index, thus including the $f(x)^{-k}$ case, which reminded me of Ramanujan integrals. But I couldn't get in...
    – Gottfried Helms, Dec 22 '18 at 11:12
























real-analysis linear-algebra combinatorics functional-analysis tetration






asked Dec 22 '18 at 6:22, edited Dec 22 '18 at 11:06
– The_Sympathizer








  • By my numerical tests, the method of using truncated Carleman matrices and trying to approach a sensible result by increasing the size of the matrix seems to implement an approximation to the Kneser method. The unfortunate thing is that the diagonalizations lead to sets of eigenvalues which vary much with the matrix size; for instance, a 64x64 matrix approximated the Kneser solution to an error of about 1e-6. The advantage(?) of the method is to have a real-to-real approximation for real fractional "heights". One other method (...)
    – Gottfried Helms, Dec 22 '18 at 10:52

  • (...) One other method tries to do the analysis with the ansatz of the Schröder mechanism, which introduces power series for the exponentiation shifted around the attracting or repelling fixpoint. But this works nicely only for bases $b$ in the range $1 \lt b \lt e^{1/e}$; for general bases we get power series/Carleman matrices with complex coefficients that are difficult to evaluate for fractional heights - and, in the general case, complex values even for real fractional heights. I've compared 5 basic methods in a small essay: go.helms-net.de/math/tetdocs/ComparisionOfInterpolations.pdf
    – Gottfried Helms, Dec 22 '18 at 10:58

  • @Gottfried Helms: Yupp! What I'm trying to do is see if we can't break it down analytically to find a combinatorial formula (infinite summation) for the Kneser tetration, which may enable both its fast and efficient evaluation and its mathematical analysis for interesting properties and relations to other areas of maths. That's a bit discouraging, however, since it suggests that a direct attack by diagonalization may not work in the infinite case even if we can fangle some kind of formula.
    – The_Sympathizer, Dec 22 '18 at 10:59

  • The coefficients seem so nice and simple that it feels as if there's almost gotta be some kind of explicit form, but I don't know, it's no proof... (or disproof, either :((( )
    – The_Sympathizer, Dec 22 '18 at 10:59

  • Last comment at the moment. On the problem of analytical diagonalization of the infinite Carleman matrix: I've really invested much time in this - I found an analytical expression via the $\exp(x)-1$ case and then saw that it was identical to the Schröder solution (without knowing that at all and being pointed to it only later). There might be one more possible alternative path here. I've read an article of Eri Jabotinsky who extended the Carleman matrix to the two-sided infinite index, thus including the $f(x)^{-k}$ case, which reminded me of Ramanujan integrals. But I couldn't get in...
    – Gottfried Helms, Dec 22 '18 at 11:12














1 Answer
Just a short answer on what I've found concerning how to express analytically the matrix diagonalization for $C[\exp]$, which I call $B$ for $\exp(x)$ with base $b=e$, and $B_b$ for the general $b^x$.

Using the notation $V(x)=[1,x,x^2,x^3,...]$ for the infinite "Vandermonde" row vector of consecutive powers of $x$, $P$ for the upper-triangular Pascal matrix, $S2$ for the lower-triangular matrix of Stirling numbers of the second kind, and $F$ and $f=F^{-1}$ for the diagonal matrix of factorials and its inverse, we can write
$$ V(x) \cdot B = V(e^x) $$
By simple $LDU$-decomposition of $B$ we find
$$ \begin{array}{rl}
B &= (f \cdot S2 \cdot F) \cdot P \\
V(x) \cdot ( (f \cdot S2 \cdot F) \cdot P) &= V(e^x) \\
\end{array} $$
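Since $S2$ is lower and $P$ upper triangular, this factorization survives truncation exactly, so it can be verified directly; a minimal sketch, assuming the row-vector convention above gives $B_{ij} = j^i/i!$:

    import numpy as np
    from math import comb, factorial

    N = 10

    # Stirling numbers of the second kind: S2(i,k) = k*S2(i-1,k) + S2(i-1,k-1)
    S2 = np.zeros((N, N)); S2[0, 0] = 1.0
    for i in range(1, N):
        for k in range(1, i + 1):
            S2[i, k] = k * S2[i-1, k] + S2[i-1, k-1]

    f = np.diag([1.0 / factorial(i) for i in range(N)])    # inverse factorials
    F = np.diag([float(factorial(i)) for i in range(N)])   # factorials
    P = np.array([[float(comb(j, k)) for j in range(N)] for k in range(N)])  # upper Pascal

    B = np.array([[j**i / factorial(i) for j in range(N)] for i in range(N)])  # V(x).B = V(e^x)
    assert np.allclose(f @ S2 @ F @ P, B)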

What I then found (by numerical tests inspired by some hypotheses) was that a pair of suitable powers of $P$ triangularizes $B$:
$$ B = P^{-t} \cdot U_t \cdot P^{t} $$

where $t$ is the (complex) fixpoint of $\exp()$ and $U_t$ is lower triangular. Let's write $u = \log(t)$ and $\;^dV(\cdot)$ for the diagonally written $V(\cdot)$. Then this is simply
$$ U_t = \;^dV(u) \cdot f\,S2\,F $$
This can be diagonalized (though with complex coefficients if $t$, or more precisely $u = \log(t)$, is complex, as in our current example):
$$ U_t = W_t \cdot \;^dV(u) \cdot W_t^{-1} $$

So all in all we can write
$$ \begin{array}{rl}
V(x) \cdot ( P^{-t} \cdot (W_t \cdot \;^dV(u) \cdot W_t^{-1} ) \cdot P^{t}) &= V(e^x) \\
V(x) \cdot ( P^{-t} \cdot (W_t \cdot \;^dV(u^h) \cdot W_t^{-1} ) \cdot P^{t}) &= V(\exp^{\circ h}(x)) \\
\end{array} $$
and, conveniently,
$$ (V(x) \cdot P^{-t}) \cdot (W_t \cdot \;^dV(u^h) \cdot W_t^{-1} ) = V(\exp^{\circ h}(x)) \cdot P^{-t} $$

Ironically, by the functionality of $P$ and its powers we have $V(x) \cdot P^{-t} = V(x-t)$, and after further cosmetics our formula reduces to
$$ V(x-t) \cdot (W_t \cdot \;^dV(u) \cdot W_t^{-1} ) = V(e^x-t) $$

and one recognizes the mechanism of shifting the power series towards its fixpoint $t$! It did not take long to recognize that $W_t$ is just the Carleman matrix for the Schröder function, with complex coefficients (in our current example); thus this all gives complex resulting values for fractional real heights at real starting values $x$.




This is all simpler (and involves only real numbers) if the fixpoint $t$ is real, i.e. if the base satisfies $1 \lt b \lt e^{1/e}$, so that $1 \lt t \lt e$ and $0 \lt u = \log(t) \lt 1$. For instance, if $b=\sqrt{2}$ then $t=2$ and $u=\log(2) \approx 0.693$, and then $W_t$ too has only real coefficients.
Moreover, all coefficients in $U_t = W_t \cdot \;^dV(u^h) \cdot W_t^{-1}$ (where $h$ is the iteration height, possibly fractional) are real and are describable in terms of polynomials in $u$ and $h$, with the order depending on the index in the power series (see text, see index-of-pages).
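To make the real-fixpoint case concrete, here is a rough numerical sketch for $b=\sqrt{2}$ (my own construction: it uses scipy's generic fractional matrix power on a truncated Carleman matrix of the fixpoint-shifted map, rather than the closed-form polynomials described above, so everything is approximate):

    import numpy as np
    from math import log, factorial
    from scipy.linalg import fractional_matrix_power

    N = 16                    # truncation size
    b = 2 ** 0.5              # base sqrt(2): real fixpoint t = 2, u = log 2
    t, u = 2.0, log(2.0)

    # Taylor coefficients of the shifted map g(y) = b**(y+t) - t = 2*(exp(u*y/2) - 1)
    g = np.array([0.0] + [2 * (u / 2)**k / factorial(k) for k in range(1, N)])

    # truncated Carleman matrix M with V(y).M = V(g(y)): column j = series of g(y)**j
    M = np.zeros((N, N)); M[0, 0] = 1.0
    col = M[:, 0].copy()
    for j in range(1, N):
        col = np.convolve(col, g)[:N]
        M[:, j] = col

    def ev(y, A):
        # entry 1 of V(y).A, i.e. the function encoded by the Carleman-type matrix A
        return (np.array([y**i for i in range(N)]) @ A)[1]

    x = 1.5; y = x - t
    assert np.isclose(ev(y, M) + t, b ** x)       # sanity check: M encodes f itself

    Mh = fractional_matrix_power(M, 0.5).real     # eigenvalues u**j > 0: imag is roundoff
    half = ev(y, Mh)
    print("f^(1/2)(f^(1/2)(x)) =", ev(half, Mh) + t)
    print("b**x                =", b ** x)

The composed half-iterates should approximately reproduce $b^x$, with the agreement improving as $N$ grows.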

Additional remark: W. Reshetnikow presented on MathOverflow this year (I don't have the link at hand) an ansatz which avoids the Carleman-matrix method, introducing essentially the $q$-factorials and $q$-binomials. This made it look really good! However, by my own earlier analyses I had found that it leads to the same polynomials in $u$ and $h$ as the analysis of the infinite Carleman matrices just described.

So the analytical access via infinite Carleman matrices and their diagonalization leads naturally to the Schröder function and the Schröder mechanism (including the shift towards fixpoints) - which is what you did not want...






answered Dec 22 '18 at 12:40 by Gottfried Helms
  • Hmm. This is interesting, as it suggests that the matrix power is not unique, as thought. What you really want, then, is a matrix power which somehow ends up as the limit of the finite matrix powers of the truncated matrices.
    – The_Sympathizer, Dec 22 '18 at 12:48

  • @The_Sympathizer: hmm, only to avoid a possible misunderstanding: what/where do you mean by "matrix power is not unique"?
    – Gottfried Helms, Dec 22 '18 at 12:51

  • It seems (and I might have read it wrong) that you constructed a sort of matrix power which causes the Carleman matrix to power like the Schröder equation solution.
    – The_Sympathizer, Dec 22 '18 at 12:52

  • Though now I'm not sure...
    – The_Sympathizer, Dec 22 '18 at 12:53

  • Yes, it "implements" exactly the Schröder mechanism, and so the fractional powers of the matrix, and by this the fractional iterates of the function. Note that the eigenmatrix $W$ can in general be understood as a (formal) infinite power of the basic matrix operator for the function, and this also reflects the infinitely iterated basic function in the Schröder mechanism. So $W_t$ is the Carleman matrix of the Schröder function $\sigma()$.
    – Gottfried Helms, Dec 22 '18 at 12:57


