Although it makes no sense to ask for a probability (unless you specify a prior distribution over the endpoints), you can find the relative likelihood. A good basis for comparison is the alternative hypothesis that the numbers are drawn from a uniform distribution between a lower limit $L$ and an upper limit $U$.
The sufficient statistics are the minimum $X$ and the maximum $Y$ of all the data (assuming each number is drawn independently); it does not matter whether you draw the data in batches or not. When drawing from the interval $[0,1]$, the joint distribution of $(X, Y)$ is continuous with density
$$\eqalign{f(x,y) &= \binom{n}{1,n-2,1}(y-x)^{n-2}\mathcal{I}(0\le x\le y\le 1) \\ &= n(n-1)(y-x)^{n-2}\mathcal{I}(0\le x\le y\le 1).}$$
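As a quick numerical sanity check (a sketch in R, with an arbitrary illustrative choice of $n=5$), this density does integrate to $1$ over the triangle $0\le x\le y\le 1$:

n <- 5                                         # arbitrary illustrative sample size
f <- function(x, y) n * (n - 1) * (y - x)^(n - 2)
inner <- function(y)                           # integrate out x for each fixed y
  sapply(y, function(yy) integrate(function(x) f(x, yy), 0, yy)$value)
integrate(inner, 0, 1)$value                   # very close to 1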
When scaled by $U-L$ and shifted by $L$, this density becomes
$$f_{(L,U)}(x,y) = (U-L)^{-n} n(n-1)(y-x)^{n-2}\mathcal{I}(L\le x\le y\le U).$$
Obviously this is greatest when $L = x$ and $U = y$, because shrinking the interval to the observed extremes maximizes the factor $(U-L)^{-n}$.
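To illustrate (a sketch with arbitrary values $n=10$, $x=0.2$, $y=0.8$), evaluating $f_{(L,U)}(x,y)$ over a grid of candidate endpoints puts the maximum exactly at $(L,U)=(x,y)$:

n <- 10; x <- 0.2; y <- 0.8                    # arbitrary illustrative values
L <- seq(0, x, length.out = 51)                # candidate lower limits, L <= x
U <- seq(y, 1, length.out = 51)                # candidate upper limits, U >= y
dens <- outer(L, U, function(l, u) (u - l)^(-n) * n * (n - 1) * (y - x)^(n - 2))
which(dens == max(dens), arr.ind = TRUE)       # attained at L = x, U = y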
The relative likelihood is their ratio, best expressed as a logarithm:
$$\Lambda(X,Y) = \log\left(\frac{f_{(X,Y)}(X,Y)}{f(X,Y)}\right) = -n\log(Y-X).$$
A small value of this is evidence for the hypothesis $(L,U)=(0,1)$; larger values are evidence against it. Of course if $X \lt 0$ or $Y \gt 1$ the hypothesis is controverted. But when the hypothesis is true, for large $n$ (greater than $20$ or so), $2\Lambda(X,Y)$ will have approximately a $\chi^2(4)$ distribution. Assuming $X \ge 0$ and $Y \le 1$, this enables you to reject the hypothesis when the chance of a $\chi^2(4)$ variable exceeding $2\Lambda(X,Y)$ becomes so small you can no longer suppose the large value can be attributed to chance alone.
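Putting this together, here is a minimal sketch of the whole procedure (the name `endpoint.test` is my own, not a standard function):

# Sketch: likelihood-ratio test of the hypothesis (L, U) = (0, 1).
endpoint.test <- function(x) {
  if (min(x) < 0 || max(x) > 1)                # hypothesis is controverted outright
    return(list(statistic = Inf, p.value = 0))
  stat <- -2 * length(x) * log(diff(range(x))) # 2 * Lambda(X, Y)
  list(statistic = stat,
       p.value = pchisq(stat, df = 4, lower.tail = FALSE))
}
endpoint.test(runif(100))                      # under the hypothesis: a moderate p-value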
I will not attempt to prove that the $\chi^2(4)$ distribution is the one to use; I will merely show that it works by simulating a large number of independent values of $2\Lambda(X,Y)$ when the hypothesis is true. Since you have the ability to generate large values of $n$, let's take $n=500$ as an example.
The figure shows $100,000$ such results for $n=500$; the red curve graphs the density of a $\chi^2(4)$ variable, which agrees closely with the histogram.
As a worked example consider the situation posed in the question where $n=100$, $X= 0.51$, and $Y=0.69$. Now
$$2\Lambda(0.51, 0.69) = -2(100\log(0.69 - 0.51)) = 343.$$
The corresponding $\chi^2(4)$ probability is less than $10^{-72}$: although we would never trust the accuracy of the $\chi^2$ approximation this far out into the tail (even with $n=100$ observations), this value is so small that certainly these data were not obtained from $100$ independent uniform$(0,1)$ variables!
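The same calculation in R:

(stat <- -2 * 100 * log(0.69 - 0.51))          # 342.97
pchisq(stat, df = 4, lower.tail = FALSE)       # about 6e-73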
In the second situation where $X=0.01$ and $Y=0.99$,
$$2\Lambda(0.01, 0.99) = -2(100\log(0.99 - 0.01)) = 4.04.$$
Now the $\chi^2(4)$ probability is $0.40$ ($40\%$), quite consistent with the hypothesis that $(L,U)=(0,1)$.
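Again in R:

(stat <- -2 * 100 * log(0.99 - 0.01))          # 4.04
pchisq(stat, df = 4, lower.tail = FALSE)       # about 0.40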
BTW, here's R code to perform the simulations. I have reset it to just $10,000$ iterations so that it takes less than a second to complete.
n <- 500 # Sample size
N <- 1e4 # Number of simulation trials
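# Each column of the matrix holds one sample of n uniforms; compute
# 2*Lambda = -2 n log(Y - X) from the range of each sample.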
lambda <- apply(matrix(runif(n*N), nrow=n), 2, function(x) -2 * n * log(diff(range(x))))
#
# Plot the results.
#
hist(lambda, freq=FALSE, breaks=seq(0, ceiling(max(lambda)), 1/4), border="#00000040",
main="Histogram", xlab="2*Lambda")
curve(dchisq(x, 4), add=TRUE, col="Red", lwd=2)