import warnings
from itertools import islice
from numbers import Integral

import numpy as np

from .._config import get_config
from ._param_validation import Interval, validate_params


def chunk_generator(gen, chunksize):
    """Chunk generator ``gen`` into lists of length ``chunksize``. The last
    chunk may have a length less than ``chunksize``."""
    while True:
        chunk = list(islice(gen, chunksize))
        if chunk:
            yield chunk
        else:
            return


@validate_params(
    {
        "n": [Interval(Integral, 1, None, closed="left")],
        "batch_size": [Interval(Integral, 1, None, closed="left")],
        "min_batch_size": [Interval(Integral, 0, None, closed="left")],
    },
    prefer_skip_nested_validation=True,
)
def gen_batches(n, batch_size, min_batch_size=0):
    """Generator to create slices containing `batch_size` elements from 0 to `n`.

    The last slice may contain fewer than `batch_size` elements when
    `batch_size` does not divide `n`.

    Parameters
    ----------
    n : int
        Size of the sequence.
    batch_size : int
        Number of elements in each batch.
    min_batch_size : int, default=0
        Minimum number of elements in each batch.

    Yields
    ------
    slice of `batch_size` elements

    See Also
    --------
    gen_even_slices : Generator to create `n_packs` slices going up to `n`.

    Examples
    --------
    >>> from sklearn.utils import gen_batches
    >>> list(gen_batches(7, 3))
    [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)]
    >>> list(gen_batches(6, 3))
    [slice(0, 3, None), slice(3, 6, None)]
    >>> list(gen_batches(2, 3))
    [slice(0, 2, None)]
    >>> list(gen_batches(7, 3, min_batch_size=0))
    [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)]
    >>> list(gen_batches(7, 3, min_batch_size=2))
    [slice(0, 3, None), slice(3, 7, None)]
    """
    start = 0
    for _ in range(int(n // batch_size)):
        end = start + batch_size
        # Fold a trailing batch smaller than `min_batch_size` into this one.
        if end + min_batch_size > n:
            continue
        yield slice(start, end)
        start = end
    if start < n:
        yield slice(start, n)


@validate_params(
    {
        "n": [Interval(Integral, 1, None, closed="left")],
        "n_packs": [Interval(Integral, 1, None, closed="left")],
        "n_samples": [Interval(Integral, 1, None, closed="left"), None],
    },
    prefer_skip_nested_validation=True,
)
def gen_even_slices(n, n_packs, *, n_samples=None):
    """Generator to create `n_packs` evenly spaced slices going up to `n`.

    If `n_packs` does not divide `n`, the first `n % n_packs` slices contain
    one more element than the remaining ones.

    Parameters
    ----------
    n : int
        Size of the sequence.
    n_packs : int
        Number of slices to generate.
    n_samples : int, default=None
        Number of samples. Pass `n_samples` when the slices are to be used
        for sparse matrix indexing; slicing off-the-end raises an exception,
        while it works for NumPy arrays.

    Yields
    ------
    `slice` representing a set of indices from 0 to n.

    See Also
    --------
    gen_batches : Generator to create slices containing `batch_size` elements
        from 0 to `n`.

    Examples
    --------
    >>> from sklearn.utils import gen_even_slices
    >>> list(gen_even_slices(10, 1))
    [slice(0, 10, None)]
    >>> list(gen_even_slices(10, 10))
    [slice(0, 1, None), slice(1, 2, None), ..., slice(9, 10, None)]
    >>> list(gen_even_slices(10, 5))
    [slice(0, 2, None), slice(2, 4, None), ..., slice(8, 10, None)]
    >>> list(gen_even_slices(10, 3))
    [slice(0, 4, None), slice(4, 7, None), slice(7, 10, None)]
    """
    start = 0
    for pack_num in range(n_packs):
        this_n = n // n_packs
        # Distribute the remainder over the first `n % n_packs` slices.
        if pack_num < n % n_packs:
            this_n += 1
        if this_n > 0:
            end = start + this_n
            if n_samples is not None:
                end = min(n_samples, end)
            yield slice(start, end, None)
            start = end


def get_chunk_n_rows(row_bytes, *, max_n_rows=None, working_memory=None):
    """Calculate how many rows can be processed within `working_memory`.

    Parameters
    ----------
    row_bytes : int
        The expected number of bytes of memory that will be consumed
        during the processing of each row.
    max_n_rows : int, default=None
        The maximum return value.
    working_memory : int or float, default=None
        The number of rows to fit inside this number of MiB will be
        returned. When None (default), the value of
        ``sklearn.get_config()['working_memory']`` is used.

    Returns
    -------
    int
        The number of rows which can be processed within `working_memory`.

    Warns
    -----
    Issues a UserWarning if `row_bytes` exceeds `working_memory` MiB.
    """
    if working_memory is None:
        working_memory = get_config()["working_memory"]

    # 2**20 bytes per MiB; integer division gives whole rows per chunk.
    chunk_n_rows = int(working_memory * (2**20) // row_bytes)
    if max_n_rows is not None:
        chunk_n_rows = min(chunk_n_rows, max_n_rows)
    if chunk_n_rows < 1:
        warnings.warn(
            "Could not adhere to working_memory config. "
            "Currently %.0fMiB, %.0fMiB required."
            % (working_memory, np.ceil(row_bytes * 2**-20))
        )
        chunk_n_rows = 1
    return chunk_n_rows
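# ---------------------------------------------------------------------------
# Illustrative usage sketch, not part of the upstream scikit-learn module: a
# minimal demo of the helpers above, runnable via
# `python -m sklearn.utils._chunking` (assumes scikit-learn is installed so
# the relative imports resolve).
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    # chunk_generator splits an iterator into fixed-size lists; the final
    # chunk keeps whatever is left over. `iter(...)` matters: islice would
    # restart from the beginning on a plain list.
    print(list(chunk_generator(iter(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]

    # gen_batches yields contiguous slices of `batch_size` elements
    # (0:4, 4:8, 8:10), while gen_even_slices spreads 10 elements evenly
    # over 3 packs (0:4, 4:7, 7:10).
    print(list(gen_batches(10, 4)))
    print(list(gen_even_slices(10, 3)))

    # get_chunk_n_rows: with 8-byte rows and a 1 MiB budget,
    # 2**20 // 8 == 131072 rows fit per chunk.
    print(get_chunk_n_rows(8, working_memory=1))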