We study the problem of reconstructing a high-dimensional signal $\mathrm{x} \in \mathbb{R}^{n}$ from a low-dimensional noisy linear measurement $\mathrm{y} = \mathrm{M}\mathrm{x} + \mathrm{e} \in \mathbb{R}^{\ell}$, assuming $\mathrm{x}$ admits a certain structure. We model the measurement matrix as $\mathrm{M} = \mathrm{B}\mathrm{A}$, with arbitrary $\mathrm{B} \in \mathbb{R}^{\ell \times m}$ and sub-gaussian $\mathrm{A} \in \mathbb{R}^{m \times n}$, thereby allowing for a family of random measurement matrices that may have heavy tails, dependent rows and columns, and a large dynamic range of singular values. The structure is either given as a non-convex cone $T \subset \mathbb{R}^{n}$ or induced by minimizing a given convex function $f(\cdot)$; hence our framework is sparsity-free. We prove, in both cases, that an approximate empirical risk minimizer robustly recovers the signal whenever the effective number of measurements is sufficient, even in the presence of model mismatch, i.e., when the signal does not exactly admit the model's structure. While in classical compressed sensing the number of independent sub-gaussian measurements governs the possibility of robust reconstruction, in our setting the effective number of measurements depends on the properties of $\mathrm{B}$. We show that, in this model, the stable rank of $\mathrm{B}$ plays the role of the effective number of measurements, and accurate recovery is guaranteed whenever it exceeds, up to a constant factor, the effective dimension of the structure set. We apply our results to the special case of generative priors, i.e., when $\mathrm{x}$ is close to the range of a Generative Neural Network (GNN) with ReLU activation functions. Moreover, if the GNN has random weights in its last layer, our theory allows for a partial Fourier measurement matrix, thus taking the first step towards a theoretical analysis of compressed sensing MRI with GNNs. Our work relies on a recent result in random matrix theory by Jeong et al. (2020).
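To make the setup concrete, the following minimal numerical sketch (not from the paper; all dimensions, the Rademacher choice of sub-gaussian entries for $\mathrm{A}$, the decaying spectrum of $\mathrm{B}$, and the sparse example signal are illustrative assumptions) simulates the measurement model $\mathrm{y} = \mathrm{B}\mathrm{A}\mathrm{x} + \mathrm{e}$ and computes the standard stable rank $\|\mathrm{B}\|_F^2 / \|\mathrm{B}\|_2^2$, the quantity that plays the role of the effective number of measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
n, m, ell = 1000, 200, 50

# Sub-gaussian A: Rademacher (+/-1) entries are one sub-gaussian example.
A = rng.choice([-1.0, 1.0], size=(m, n))

# Arbitrary B: built with a geometrically decaying spectrum,
# giving a large dynamic range of singular values.
U, _ = np.linalg.qr(rng.standard_normal((ell, ell)))
V, _ = np.linalg.qr(rng.standard_normal((m, ell)))
s = np.geomspace(1.0, 1e-3, ell)
B = U @ np.diag(s) @ V.T

M = B @ A  # measurement matrix M = B A

# Stable rank of B: squared Frobenius norm over squared spectral norm.
stable_rank = np.linalg.norm(B, "fro") ** 2 / np.linalg.norm(B, 2) ** 2
print(f"stable rank of B: {stable_rank:.2f} (ambient rows: {ell})")

# Noisy measurement y = M x + e; x is sparse here purely as one
# example of a structured signal (the theory is sparsity-free).
x = np.zeros(n)
x[rng.choice(n, size=10, replace=False)] = rng.standard_normal(10)
e = 0.01 * rng.standard_normal(ell)
y = M @ x + e
```

With the decaying spectrum above, the stable rank of $\mathrm{B}$ can be far smaller than its row dimension $\ell$, which is exactly the regime where counting raw measurements overstates the information actually available for recovery.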