I am trying to understand how to produce (deterministically) a lattice $L \subset \mathbb{R}^n$ and a "bad basis" for $L$, in any given dimension, say $n = 20$.
By "bad basis" $(b_1, …, b_n)$, I mean that when we apply the LLL-algorithm to it (say for $delta = 1$) then we get a basis $(v_1, …, v_n)$ such that $| v_1 |$ is "as close as possible" from the upper bound $(delta – 1/4)^{-(n-1)/2} lambda_1(L)$.
I am interested in explicit examples where one can describe the $b_i$'s easily (or explicitly define a unimodular matrix $U$ that produces the bad basis $B$ from a "good basis" $G$, the latter being usable to compute $\lambda_1(L)$). I have seen that one can sometimes take $B = \mathrm{HNF}(G)$, but this does not always yield a bad basis, so it does not seem fully explicit (nor very clear) to me…
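To make the question concrete, here is a minimal sketch of the check I have in mind (assuming SageMath; the modulus $q$, the planted vector $(1, \dots, 1)$ and $\delta = 0.99$ are my own toy choices, and for this particular $G$ the HNF is essentially $G$ itself, so it does not actually give a bad basis; it only illustrates how I would measure how far $\|v_1\|$ is from the bound):

```python
# SageMath sketch (toy data, my own choices; this G is certainly not a
# hard instance, it only illustrates the check I would like to run).
n = 20
q = 1009  # arbitrary modulus for a toy "planted short vector" lattice

# "Good" basis G: first row is the planted short vector (1,...,1), the
# remaining rows are q*e_i, so lambda_1(L) <= sqrt(n).
rows = [[1] * n] + [[q if j == i else 0 for j in range(n)] for i in range(1, n)]
G = matrix(ZZ, rows)

B = G.hermite_form()            # candidate "bad" basis B = HNF(G)
delta = 0.99                    # implementations typically require delta < 1
V = B.LLL(delta=delta)          # rows of V form the LLL-reduced basis
v1 = V.row(0)

lam1_upper = float(n) ** 0.5    # lambda_1(L) <= sqrt(n) by construction
bound = (delta - 1/4) ** (-(n - 1) / 2) * lam1_upper
print(float(v1.norm()), float(bound))
```

What I am missing is a deterministic recipe for $G$ (or for $U$) such that the resulting $B$ makes this ratio close to the worst case.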