This paper compares the behaviour of several major TCP variants in an LTE network. Using the LENA LTE simulator and the DCE framework, we compare the behaviour of the Linux implementations of seven TCP variants, studying the evolution of throughput, congestion window and queuing delay in four scenarios with different network loads and flow types. Our measurements show that, in the situations we consider, most variants quickly reach full link utilisation. However, to achieve the same throughput, they create different amounts of queuing delay. On the one hand, loss-based algorithms tend to fill the queue completely, creating large queuing delays and inducing packet losses. On the other hand, delay-based variants manage to limit the queue size and reduce the number of packets dropped by the eNodeB, but struggle to reach the maximum throughput in some circumstances.
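As a rough illustration of the methodology, the sketch below shows how ns-3 nodes can be made to run the Linux network stack under DCE and how a TCP variant can be selected via the kernel sysctl; it is a minimal outline assuming a standard DCE build, and the node count, simulation duration and congestion-control value are placeholders rather than the paper's actual configuration (the LTE topology and traffic applications are omitted).

```cpp
// Minimal sketch: selecting a Linux TCP variant inside ns-3 DCE nodes.
// Assumes ns-3 built with DCE and a Linux kernel library available as
// liblinux.so (the library name can differ between DCE versions).
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  NodeContainer nodes;
  nodes.Create (2); // placeholder; the paper's scenarios use an LTE topology

  // Replace the native ns-3 stack with the real Linux kernel stack,
  // so the actual Linux TCP implementations are exercised.
  DceManagerHelper dceManager;
  dceManager.SetNetworkStack ("ns3::LinuxSocketFdFactory",
                              "Library", StringValue ("liblinux.so"));
  dceManager.Install (nodes);

  LinuxStackHelper stack;
  stack.Install (nodes);

  // Pick the TCP variant under test through the usual Linux sysctl;
  // "cubic" is an example value, not necessarily one of the paper's seven.
  LinuxStackHelper::SysctlSet (nodes, ".net.ipv4.tcp_congestion_control",
                               "cubic");

  // LTE channel setup and traffic applications would be configured here.

  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```

Because DCE executes the unmodified kernel code, switching between variants amounts to changing the sysctl value, which is what makes a like-for-like comparison of the Linux implementations possible.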