
Python numpy module: array_split() example source code (the numpy module in Python)



If you want to learn about the array_split() examples in the Python numpy module and about the numpy module in Python generally, this article is for you. We will go through the array_split() example source code from several angles, explain how the numpy module is used in Python, and give practical case studies along the way. We hope it helps.

Contents:

Python numpy module: array_split() example source code (the numpy module in Python)

Python numpy module, array_split() example source code

We have extracted the following 50 code examples from open-source Python projects to illustrate how numpy.array_split() is used.
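Before the project code, here is a minimal sketch of the call itself (the array and section count are my own example values):

import numpy as np

a = np.arange(10)
# Unlike np.split, array_split accepts a section count that does not evenly
# divide the axis; the leading chunks are one element longer than the rest.
chunks = np.array_split(a, 3)
print([c.tolist() for c in chunks])   # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]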

Project: GOS    Author: crcresearch    | project source | file source
def create_agents(self, generator):
    """
    Given information on a set of countries and a generator function,
    generate the agents and assign the results to ``self.agents``.

    :type generator: DataFrame, str, int
    :param generator: A function which generates the agents.
    """
    self.generator = generator
    country_array = pd.concat([pd.Series([c] * k["Population"]) for c, k in self.df.iterrows()])
    country_array.index = range(len(country_array))
    # Garbage collect before creating new processes.
    gc.collect()
    self.agents = pd.concat(
        self.pool.imap(self._gen_agents,
                       np.array_split(country_array, self.processes * self.splits))
    )
    self.agents.index = range(len(self.agents))
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def test_latlon2pix_internals(pix_size_single, origin_point, is_flipped,
  2. num_chunks, chunk_position):
  3.  
  4. img = make_image(pix_size_single,
  5. num_chunks, chunk_position)
  6. chunk_idx = img.chunk_idx
  7. res_x = img._full_res[0]
  8. res_y = img._full_res[1]
  9. pix_size = (img.pixsize_x, img.pixsize_y)
  10. origin = (img._start_lon, img._start_lat)
  11.  
  12. # +0.5 for centre of pixels
  13. lons = (np.arange(res_x) + 0.5) * pix_size[0] + origin[0]
  14. all_lats = (np.arange(res_y) + 0.5) * pix_size[1] + origin[1]
  15. lats = np.array_split(all_lats, num_chunks)[chunk_idx]
  16.  
  17. pix_x = np.arange(res_x)
  18. pix_y = np.arange(lats.shape[0])
  19.  
  20. d = np.array([[a, b] for a in lons for b in lats])
  21. xy = img.lonlat2pix(d)
  22. true_xy = np.array([[a, b] for a in pix_x for b in pix_y])
  23. assert np.all(xy == true_xy)
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def test_pix2latlong(pix_size_single, img._start_lat)
  2.  
  3. true_lons = np.arange(res_x) * pix_size[0] + origin[0]
  4. all_lats = np.arange(res_y) * pix_size[1] + origin[1]
  5. true_lats = np.array_split(all_lats, num_chunks)[chunk_idx]
  6. true_d = np.array([[a, b] for a in true_lons for b in true_lats])
  7.  
  8. pix_x = np.arange(res_x)
  9. pix_y = np.arange(img.resolution[1]) # chunk resolution
  10.  
  11. xy = np.array([[a, b] for a in pix_x for b in pix_y])
  12.  
  13. lonlats = img.pix2lonlat(xy)
  14. assert np.all(lonlats == true_d)
Project: motif-classify    Author: macks22    | project source | file source
  1. def transform(self, X):
  2. if self.tagger is None:
  3. raise ValueError("Must find_motifs before you can tag anything")
  4.  
  5. logging.info("Tagging %s data with motifs using %d workers..." % (
  6. str(X.shape), self.n_jobs))
  7.  
  8. if self.n_jobs > 1:
  9. pool = mp.ProcessingPool(self.n_jobs)
  10. splits = np.array_split(X, self.n_jobs)
  11. tag_lists = pool.map(self._tag_motifs, splits)
  12. tags = list(itertools.chain.from_iterable(tag_lists))
  13. else:
  14. tags = self._tag_motifs(X)
  15.  
  16. logging.info("All motifs have been tagged")
  17. return self._sparsify_tags(tags)
Project: word2vec_pipeline    Author: NIHOPA    | project source | file source
def subset_iterator(X, m, repeats=1):
    '''
    Iterates over array X in chunks of m, repeat number of times.
    Each time the order of the repeat is randomly generated.
    '''

    N, dim = X.shape
    progress = tqdm(total=repeats * int(N / m))

    for i in range(repeats):

        indices = np.random.permutation(N)

        for idx in np.array_split(indices, N // m):
            yield X[idx][:]
            progress.update()

    progress.close()
Project: painters    Author: inejc    | project source | file source
  1. def _split_into_groups(y, num_groups):
  2. groups = [[] for _ in range(num_groups)]
  3. group_index = 0
  4.  
  5. for cls in set(y):
  6. this_cls_indices = np.where(y == cls)[0]
  7. num_cls_samples = len(this_cls_indices)
  8.  
  9. num_cls_split_groups = ceil(num_cls_samples / 500)
  10. split = np.array_split(this_cls_indices, num_cls_split_groups)
  11.  
  12. for cls_group in split:
  13. groups[group_index] = np.hstack((groups[group_index], cls_group))
  14. group_index = (group_index + 1) % num_groups
  15.  
  16. return groups
Project: dvd    Author: ajayrfhp    | project source | file source
def get_embedding_X(img):
    '''
    Args : Numpy Images vector
    Returns : Embedded Matrix of length Samples, 4096
    '''
    img = img.reshape((img.shape[0], img.shape[1], img.shape[2], 1))
    sess = tf.Session()
    imgs = tf.placeholder(tf.float32, [None, None, None])
    vgg = vgg16(imgs, '/tmp/vgg16_weights.npz', sess)
    embs = []
    cnt = 0
    for img_batch in np.array_split(img, img.shape[0] / 1000):
        emb = sess.run(vgg.emb, feed_dict={vgg.imgs: img_batch})
        embs.extend(emb)
        cnt += 1
        progress = round(100 * (cnt * 1000 / img.shape[0]), 2)
        if progress % 10 == 0:
            print progress
    embs = np.array(embs)
    print embs.shape
    embs = np.reshape(embs, (embs.shape[0], embs.shape[1] * embs.shape[2] * embs.shape[3]))
    return embs
Project: yt    Author: yt-project    | project source | file source
  1. def __init__(self, pobj, just_list = False, attr=''_grids'',
  2. round_robin=False):
  3. ObjectIterator.__init__(self, just_list, attr=attr)
  4. # pobj has to be a ParallelAnalysisInterface,so it must have a .comm
  5. # object.
  6. self._offset = pobj.comm.rank
  7. self._skip = pobj.comm.size
  8. # Note that we''re doing this in advance,and with a simple means
  9. # of choosing them; more advanced methods will be explored later.
  10. if self._use_all:
  11. self.my_obj_ids = np.arange(len(self._objs))
  12. else:
  13. if not round_robin:
  14. self.my_obj_ids = np.array_split(
  15. np.arange(len(self._objs)), self._skip)[self._offset]
  16. else:
  17. self.my_obj_ids = np.arange(len(self._objs))[self._offset::self._skip]
Project: deep_metric_learning    Author: ronekko    | project source | file source
  1. def iter_combinatorial_pairs(queue, num_examples, batch_size, interval,
  2. num_classes, augment_positive=False):
  3. num_examples_per_class = num_examples // num_classes
  4. pairs = np.array(list(itertools.combinations(range(num_examples), 2)))
  5.  
  6. if augment_positive:
  7. additional_positive_pairs = make_positive_pairs(
  8. num_classes, num_examples_per_class, num_classes - 1)
  9. pairs = np.concatenate((pairs, additional_positive_pairs))
  10.  
  11. num_pairs = len(pairs)
  12. num_batches = num_pairs // batch_size
  13. perm = np.random.permutation(num_pairs)
  14. for i, batch_indexes in enumerate(np.array_split(perm, num_batches)):
  15. if i % interval == 0:
  16. x, c = queue.get()
  17. x = x.astype(np.float32) / 255.0
  18. c = c.ravel()
  19. indexes0, indexes1 = pairs[batch_indexes].T
  20. x0, x1, c0, c1 = x[indexes0], x[indexes1], c[indexes0], c[indexes1]
  21. t = np.int32(c0 == c1) # 1 if x0 and x1 are same class,0 otherwise
  22. yield x0, t
Project: deep_metric_learning    Author: ronekko    | project source | file source
  1. def get_epoch_indexes(self):
  2. B = self.batch_size
  3. K = self.num_classes
  4. M = self.num_per_class
  5. N = K * M # number of total examples
  6. num_batches = M * int(K // B) # number of batches per epoch
  7.  
  8. indexes = np.arange(N, dtype=np.int32).reshape(K, M)
  9. epoch_indexes = []
  10. for m in range(M):
  11. perm = np.random.permutation(K)
  12. c_batches = np.array_split(perm, num_batches // M)
  13. for c_batch in c_batches:
  14. b = len(c_batch) # actual number of examples of this batch
  15. indexes_anchor = M * c_batch + m
  16.  
  17. positive_candidates = np.delete(indexes[c_batch], axis=1)
  18. indexes_positive = positive_candidates[
  19. range(b), np.random.choice(M - 1, size=b)]
  20.  
  21. epoch_indexes.append((indexes_anchor, indexes_positive))
  22.  
  23. return epoch_indexes
Project: ESL-Model    Author: littlezz    | project source | file source
  1. def pre_processing(self):
  2. """Provide same API as Model,we split data to K folds here.
  3. """
  4. if self.random:
  5. mask = np.random.permutation(self.train_x.shape[0])
  6. train_x = self.train_x[mask]
  7. train_y = self.train_y[mask]
  8. else:
  9. train_x = self.train_x[:]
  10. train_y = self.train_y[:]
  11.  
  12. if self.select_train_method == ''step'':
  13. self.x_folds = [train_x[i::self.k_folds] for i in range(0, self.k_folds)]
  14. self.y_folds = [train_y[i::self.k_folds] for i in range(0, self.k_folds)]
  15. else:
  16. self.x_folds = np.array_split(train_x, self.k_folds)
  17. self.y_folds = np.array_split(train_y, self.k_folds)
  18.  
  19.  
  20. # for i in range(self.k_folds):
  21. # self.x_folds[i] = self.train_x[0] + self.x_folds[i] + self.train_x[-1]
  22. # self.y_folds[i] = self.train_y[0] + self.y_folds[i] + self.train_y[-1]
Project: poeai    Author: nicholastoddsmith    | project source | file source
  1. def Train(self, C, A, Y, SF):
  2. ''''''
  3. Train the classifier using the sample matrix A and target matrix Y
  4. ''''''
  5. C.fit(A, Y)
  6. YH = np.zeros(Y.shape, dtype = np.object)
  7. for i in np.array_split(np.arange(A.shape[0]), 32): #Split up verification into chunks to prevent out of memory
  8. YH[i] = C.predict(A[i])
  9. s1 = SF(Y, YH)
  10. print(''All:{:8.6f}''.format(s1))
  11. ''''''
  12. ss = ShuffleSplit(random_state = 1151) #Use fixed state for so training can be repeated later
  13. trn,tst = next(ss.split(A,Y)) #Make train/test split
  14. mi = [8] * 1 #Maximum number of iterations at each iter
  15. YH = np.zeros((A.shape[0]),dtype = np.object)
  16. for mic in mi: #Chunk size to split dataset for CV results
  17. #C.SetMaxIter(mic) #Set the maximum number of iterations to run
  18. #C.fit(A[trn],Y[trn]) #Perform training iterations
  19. ''''''
Project: wavelet-denoising    Author: mackaiver    | project source | file source
  1. def add_point(self, t, alt, az):
  2.  
  3. self.window.append((t, az))
  4. if self._current_window_size() < self.window_duration:
  5. return
  6.  
  7. points = np.array(self.window)
  8. steady, current = np.array_split(points, 2)
  9.  
  10. _, steady_cube = self.create_cube(steady)
  11. timestamps, current_cube = self.create_cube(current)
  12.  
  13. t = self.denoise_and_compare_cubes(steady_cube, current_cube)
  14. self.trigger_criterion.append(list(t))
  15. self.trigger_criterion_timestamps.append(list(timestamps))
  16.  
  17. has_triggered = self.check_trigger(t)
  18. new_duration = self.window_duration - self.step
  19. self._reduce_to_duration(new_duration)
Project: job-salary-prediction    Author: soton-data-mining    | project source | file source
  1. def predict(self):
  2. if os.path.exists(DATA_QUERIES_VECTOR_NPZ) and not FORCE_LOAD:
  3. print(''{}: loading precomputed data''.format(self.__class__.__name__))
  4. self.load_precomputed_data()
  5. else:
  6. self.precomputed_similarity()
  7.  
  8. batch_size = 100
  9. batch_elements = math.ceil(self.queries_vector.shape[0] / batch_size)
  10. batch_queue = np.array_split(self.queries_vector.A, batch_elements)
  11. print("starting batch computation of Similarity and KNN calculation")
  12.  
  13. # # multiple versions of calculating the prediction,some faster,some use more mem
  14.  
  15. # prediction = self.multiprocessor_batch_calc(batch_queue)
  16. prediction = self.batch_calculation(batch_queue)
  17. # prediction = self.individual_calculation()
  18. # prediction = self.cosine_knn_calc()
  19. # prediction = self.custom_knn_calculation(prediction)
  20.  
  21. train_avg_salary = sum(self.y_train) / len(self.y_train)
  22. cleaned_predictions = [x if str(x) != ''nan'' else train_avg_salary for x in prediction]
  23.  
  24. return self.y_train, cleaned_predictions
Project: deepsleepnet    Author: akaraspt    | project source | file source
  1. def load_test_data(self):
  2. # Remove non-mat files,and perform ascending sort
  3. allfiles = os.listdir(self.data_dir)
  4. npzfiles = []
  5. for idx, f in enumerate(allfiles):
  6. if ".npz" in f:
  7. npzfiles.append(os.path.join(self.data_dir, f))
  8. npzfiles.sort()
  9.  
  10. # Files for validation sets
  11. val_files = np.array_split(npzfiles, self.n_folds)
  12. val_files = val_files[self.fold_idx]
  13.  
  14. print "\\n========== [Fold-{}] ==========\\n".format(self.fold_idx)
  15.  
  16. print "Load validation set:"
  17. data_val, label_val = self._load_npz_list_files(val_files)
  18.  
  19. return data_val, label_val
Project: preconditioned_GPs    Author: mauriziofilippone    | project source | file source
  1. def __init__(self, X, kern, Xm):
  2.  
  3. super(PITC, self).__init__("PITC")
  4. M = np.shape(Xm)[0]
  5. self.M = M
  6.  
  7. start = time.time()
  8. X_split = np.array_split(X, M)
  9. self.kern = kern
  10. kern_blocks = np.zeros((M),dtype=object)
  11.  
  12. for t in xrange(M):
  13. nyst = Nystrom(X_split[t], Xm, False)
  14. size = np.shape(X_split[t])[0]
  15. kern_blocks[t] = kern.K(X_split[t], X_split[t]) - nyst.precon + (kern.noise)*np.identity(size)
  16.  
  17. self.blocks = kern_blocks
  18. blocked = block_diag(*kern_blocks)
  19.  
  20. self.nyst = Nystrom(X, False)
  21. self.precon = self.nyst.precon + blocked
  22. self.duration = time.time() - start
Project: chainer-pix2pix    Author: wuhuikai    | project source | file source
  1. def _read_image_as_array(path, dtype, load_size, crop_size, flip):
  2. f = Image.open(path)
  3.  
  4. A, B = numpy.array_split(numpy.asarray(f), 2, axis=1)
  5. if hasattr(f, ''close''):
  6. f.close()
  7.  
  8. A = _resize(A, Image.BILINEAR, dtype)
  9. B = _resize(B, Image.NEAREST, dtype)
  10.  
  11. sx, sy = numpy.random.randint(0, load_size-crop_size, 2)
  12. A = _crop(A, sx, sy, crop_size)
  13. B = _crop(B, crop_size)
  14.  
  15. if flip and numpy.random.rand() > 0.5:
  16. A = numpy.fliplr(A)
  17. B = numpy.fliplr(B)
  18.  
  19. return A.transpose(2, 0, 1), B.transpose(2, 1)
Project: Waskom_PNAS_2017    Author: WagnerLabPapers    | project source | file source
  1. def setup_figure():
  2.  
  3. f = plt.figure(figsize=(7, 5))
  4.  
  5. mat_grid = plt.GridSpec(2, 6, .07, .52, .98, .95, .15, .20)
  6. mat_axes = [f.add_subplot(spec) for spec in mat_grid]
  7. sticks_axes, rest_axes = np.array_split(mat_axes, 2)
  8.  
  9. scatter_grid = plt.GridSpec(1, .30, .49, .05)
  10. scatter_axes = [f.add_subplot(spec) for spec in scatter_grid]
  11.  
  12. kde_grid = plt.GridSpec(1, .21, .05)
  13. kde_axes = [f.add_subplot(spec) for spec in kde_grid]
  14.  
  15. cbar_ax = f.add_axes([.04, .62, .015, .26])
  16.  
  17. return f, sticks_axes, rest_axes, scatter_axes, kde_axes, cbar_ax
Project: open-syllabus-project    Author: davidmcclure    | project source | file source
  1. def partitions(min_val, max_val, n):
  2.  
  3. """
  4. Get start/stop boundaries for N partitions.
  5.  
  6. Args:
  7. min_val (int): The starting value.
  8. max_val (int): The last value.
  9. n (int): The number of partitions.
  10. """
  11.  
  12. pts = np.array_split(np.arange(min_val, max_val+1), n)
  13.  
  14. bounds = []
  15. for pt in pts:
  16. bounds.append((int(pt[0]), int(pt[-1])))
  17.  
  18. return bounds
Project: decoding_challenge_cortana_2016_3rd    Author: kingjr    | project source | file source
  1. def fit(self, y):
  2. """Fit a series of independent estimators to the dataset.
  3.  
  4. Parameters
  5. ----------
  6. X : array,shape (n_samples,n_features,n_estimators)
  7. The training input samples. For each data slice,a clone estimator
  8. is fitted independently.
  9. y : array,)
  10. The target values.
  11.  
  12. Returns
  13. -------
  14. self : object
  15. Return self.
  16. """
  17. self._check_Xy(X, y)
  18. self.estimators_ = list()
  19. # For fitting,the parallelization is across estimators.
  20. parallel, p_func, n_jobs = parallel_func(_sl_fit, self.n_jobs)
  21. estimators = parallel(
  22. p_func(self.base_estimator, split, y)
  23. for split in np.array_split(X, n_jobs, axis=-1))
  24. self.estimators_ = np.concatenate(estimators, 0)
  25. return self
Project: decoding_challenge_cortana_2016_3rd    Author: kingjr    | project source | file source
  1. def _transform(self, method):
  2. """Aux. function to make parallel predictions/transformation."""
  3. self._check_Xy(X)
  4. method = _check_method(self.base_estimator, method)
  5. if X.shape[-1] != len(self.estimators_):
  6. raise ValueError(''The number of estimators does not match ''
  7. ''X.shape[2]'')
  8. # For predictions/transforms the parallelization is across the data and
  9. # not across the estimators to avoid memory load.
  10. parallel, n_jobs = parallel_func(_sl_transform, self.n_jobs)
  11. X_splits = np.array_split(X, axis=-1)
  12. est_splits = np.array_split(self.estimators_, n_jobs)
  13. y_pred = parallel(p_func(est, x, method)
  14. for (est, x) in zip(est_splits, X_splits))
  15.  
  16. if n_jobs > 1:
  17. y_pred = np.concatenate(y_pred, axis=1)
  18. else:
  19. y_pred = y_pred[0]
  20. return y_pred
Project: FootballPredictors    Author: NickSadler2018    | project source | file source
  1. def _yield_minibatches_idx(self, n_batches, data_ary, shuffle=True):
  2. indices = np.arange(data_ary.shape[0])
  3. if shuffle:
  4. indices = np.random.permutation(indices)
  5. if n_batches > 1:
  6. remainder = data_ary.shape[0] % n_batches
  7.  
  8. if remainder:
  9. minis = np.array_split(indices[:-remainder], n_batches)
  10. minis[-1] = np.concatenate((minis[-1],
  11. indices[-remainder:]),
  12. axis=0)
  13. else:
  14. minis = np.array_split(indices, n_batches)
  15.  
  16. else:
  17. minis = (indices,)
  18.  
  19. for idx_batch in minis:
  20. yield idx_batch
Project: Parallel-SGD    Author: angadgill    | project source | file source
  1. def test_mini_batch_k_means_random_init_partial_fit():
  2. km = MiniBatchKMeans(n_clusters=n_clusters, init="random", random_state=42)
  3.  
  4. # use the partial_fit API for online learning
  5. for X_minibatch in np.array_split(X, 10):
  6. km.partial_fit(X_minibatch)
  7.  
  8. # compute the labeling on the complete dataset
  9. labels = km.predict(X)
  10. assert_equal(v_measure_score(true_labels, labels), 1.0)
Project: crayimage    Author: yandexdataschool    | project source | file source
  1. def binned_batch_stream(target_statistics, n_bins=64):
  2. hist, bins = np.histogram(target_statistics, bins=n_bins)
  3. indx = np.argsort(target_statistics)
  4. indicies_categories = np.array_split(indx, np.cumsum(hist)[:-1])
  5.  
  6. per_category = batch_size / n_bins
  7.  
  8. weight_correction = (np.float64(hist) / per_category).astype(''float32'')
  9. wc = np.repeat(weight_correction, per_category)
  10.  
  11. for i in xrange(n_batches):
  12. sample = [
  13. np.random.choice(ind, size=per_category, replace=True)
  14. for ind in indicies_categories
  15. ]
  16.  
  17. yield np.hstack(sample), wc
Project: crayimage    Author: yandexdataschool    | project source | file source
  1. def binned_batch_stream(target_statistics, n_bins=64):
  2. hist, bins=n_bins)
  3. indx = np.argsort(target_statistics)
  4. indicies_categories = np.array_split(indx, np.cumsum(hist)[:-1])
  5. n_samples = target_statistics.shape[0]
  6.  
  7. per_category = batch_size / n_bins
  8.  
  9. weight_correction = (n_bins * np.float64(hist) / n_samples).astype(''float32'')
  10. wc = np.repeat(weight_correction, per_category)
  11.  
  12. for i in xrange(n_batches):
  13. sample = [
  14. np.random.choice(ind, replace=True)
  15. for ind in indicies_categories
  16. ]
  17.  
  18. yield np.hstack(sample), wc
Project: array_split    Author: array-split    | project source | file source
  1. def test_shape_factors(self):
  2. """
  3. Tests for :func:`array_split.split.shape_factors`.
  4. """
  5. f = shape_factors(4, 2)
  6. self.assertTrue(_np.all(f == 2))
  7.  
  8. f = shape_factors(4, 1)
  9. self.assertTrue(_np.all(f == 4))
  10.  
  11. f = shape_factors(5, 2)
  12. self.assertTrue(_np.all(f == [1, 5]))
  13.  
  14. f = shape_factors(6, 2)
  15. self.assertTrue(_np.all(f == [2, 3]))
  16.  
  17. f = shape_factors(6, 3)
  18. self.assertTrue(_np.all(f == [1, 3]))
Project: tensorflow    Author: luyishisi    | project source | file source
def scale(boxlist, y_scale, x_scale):
    """Scale box coordinates in x and y dimensions.

    Args:
      boxlist: BoxList holding N boxes
      y_scale: float
      x_scale: float

    Returns:
      boxlist: BoxList holding N boxes
    """
    y_min, x_min, y_max, x_max = np.array_split(boxlist.get(), 4, axis=1)
    y_min = y_scale * y_min
    y_max = y_scale * y_max
    x_min = x_scale * x_min
    x_max = x_scale * x_max
    scaled_boxlist = np_box_list.BoxList(np.hstack([y_min, x_min, y_max, x_max]))

    fields = boxlist.get_extra_fields()
    for field in fields:
        extra_field_data = boxlist.get_field(field)
        scaled_boxlist.add_field(field, extra_field_data)

    return scaled_boxlist
Project: distributional_perspective_on_RL    Author: Kiwoo    | project source | file source
  1. def iterbatches(arrays, num_batches=None, batch_size=None, shuffle=True, include_final_partial_batch=True):
  2. assert (num_batches is None) != (batch_size is None), ''Provide num_batches or batch_size,but not both''
  3. arrays = tuple(map(np.asarray, arrays))
  4. n = arrays[0].shape[0]
  5. assert all(a.shape[0] == n for a in arrays[1:])
  6. inds = np.arange(n)
  7. if shuffle: np.random.shuffle(inds)
  8. sections = np.arange(0, n, batch_size)[1:] if num_batches is None else num_batches
  9. for batch_inds in np.array_split(inds, sections):
  10. if include_final_partial_batch or len(batch_inds) == batch_size:
  11. yield tuple(a[batch_inds] for a in arrays)
Project: det_k_bisbm    Author: junipertcy    | project source | file source
  1. def _gen_init_n_blocks(na, nb, ka, kb):
  2. num_nodes_a = np.arange(na)
  3. n_blocks_a = map(len, np.array_split(num_nodes_a, ka))
  4. num_nodes_b = np.arange(nb)
  5. n_blocks_b = map(len, np.array_split(num_nodes_b, kb))
  6.  
  7. n_blocks_ = " ".join(map(str, n_blocks_a)) + " " + " ".join(map(str, n_blocks_b))
  8.  
  9. return n_blocks_
Project: det_k_bisbm    Author: junipertcy    | project source | file source
  1. def gen_equal_partition(n, total):
  2. all_nodes = np.arange(total)
  3. n_blocks = list(map(len, np.array_split(all_nodes, n)))
  4.  
  5. return n_blocks
Project: GOS    Author: crcresearch    | project source | file source
  1. def run_par(self, function, **kwargs):
  2. """
  3. Run a function on the agents in parallel.
  4. """
  5. columns = kwargs["columns"] if "columns" in kwargs else self.agents.columns
  6. # Garbage collect before creating new processes.
  7. gc.collect()
  8. return pd.concat(self.pool.imap(partial(function, **kwargs),
  9. np.array_split(self.agents[columns],
  10. self.processes * self.splits)))
Project: main_loop_tf    Author: fvisin    | project source | file source
  1. def split_in_chunks(minibatch, num_splits, flatten_keys=[''labels'']):
  2. ''''''Return the splits per device
  3.  
  4. Return a list of dictionaries,one per device. Each dictionary
  5. contains,for each key,the values that should be allocated on its
  6. device.
  7. ''''''
  8. # Split the value of each key into chunks
  9. for k, v in minibatch.iteritems():
  10. minibatch[k] = np.array_split(v, num_splits)
  11. if any(k == v for v in flatten_keys):
  12. minibatch[k] = [el.flatten() for el in minibatch[k]]
  13. return map(dict, zip(*[[(k, v) for v in value]
  14. for k, value in minibatch.items()]))
Project: keras-molecules    Author: maxhodak    | project source | file source
def chunk_iterator(dataset, chunk_size=1000):
    chunk_indices = np.array_split(np.arange(len(dataset)),
                                   len(dataset) / chunk_size)
    for chunk_ixs in chunk_indices:
        chunk = dataset[chunk_ixs]
        yield (chunk_ixs, chunk)
    raise StopIteration
Project: cupy    Author: cupy    | project source | file source
def array_split(ary, indices_or_sections, axis=0):
    """Splits an array into multiple sub arrays along a given axis.

    This function is almost equivalent to :func:`cupy.split`. The only
    difference is that this function allows an integer sections that does not
    evenly divide the axis.

    .. seealso:: :func:`cupy.split` for more detail, :func:`numpy.array_split`

    """
    return core.array_split(ary, indices_or_sections, axis)
Project: cupy    Author: cupy    | project source | file source
def split(ary, indices_or_sections, axis=0):
    """Splits an array into multiple sub arrays along a given axis.

    Args:
        ary (cupy.ndarray): Array to split.
        indices_or_sections (int or sequence of ints): A value indicating how
            to divide the axis. If it is an integer, then it is treated as the
            number of sections, and the axis is evenly divided. Otherwise,
            the integers indicate indices to split at. Note that the sequence
            on the device memory is not allowed.
        axis (int): Axis along which the array is split.

    Returns:
        A list of sub arrays. Each array is a view of the corresponding input
        array.

    .. seealso:: :func:`numpy.split`

    """
    if ary.ndim <= axis:
        raise IndexError('Axis exceeds ndim')
    size = ary.shape[axis]

    if numpy.isscalar(indices_or_sections):
        if size % indices_or_sections != 0:
            raise ValueError(
                'indices_or_sections must divide the size along the axes.\n'
                'If you want to split the array into non-equally-sized '
                'arrays, use array_split instead.')
    return array_split(ary, indices_or_sections, axis)
Project: baselines    Author: openai    | project source | file source
  1. def iterbatches(arrays, *, sections):
  2. if include_final_partial_batch or len(batch_inds) == batch_size:
  3. yield tuple(a[batch_inds] for a in arrays)
Project: dataScryer    Author: Griesbacher    | project source | file source
  1. def trim_data(data, resolution):
  2. r = []
  3. for i in numpy.array_split(data, resolution):
  4. if len(i) > 0:
  5. r.append(numpy.average(i))
  6. return r
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def test_latlon2pix_edges(pix_size_single,
  2. num_chunks, img._start_lat)
  3.  
  4. # compute chunks
  5. lons = np.arange(res_x + 1) * pix_size[0] + origin[0] # right edge +1
  6. all_lats = np.arange(res_y) * pix_size[1] + origin[1]
  7. lats_chunks = np.array_split(all_lats, num_chunks)[chunk_idx]
  8. pix_x = np.concatenate((np.arange(res_x), [res_x - 1]))
  9. pix_y_chunks = range(lats_chunks.shape[0])
  10. if chunk_position == ''end'':
  11. pix_y = np.concatenate((pix_y_chunks, [pix_y_chunks[-1]]))
  12. lats = np.concatenate((lats_chunks, [res_y * pix_size[1] + origin[1]]))
  13. else:
  14. pix_y = pix_y_chunks
  15. lats = lats_chunks
  16.  
  17. d = np.array([[a, b] for a in pix_x for b in pix_y])
  18. assert np.all(xy == true_xy)
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def split_cfold(nsamples, k=5, seed=None):
  2. """
  3. Function that returns indices for splitting data into random folds.
  4.  
  5. Parameters
  6. ----------
  7. nsamples: int
  8. the number of samples in the dataset
  9. k: int,optional
  10. the number of folds
  11. seed: int,optional
  12. random seed to provide to numpy
  13.  
  14. Returns
  15. -------
  16. cvinds: list
  17. list of arrays of length k,each with approximate shape (nsamples /
  18. k,) of indices. These indices are randomly permuted (without
  19. replacement) of assignments to each fold.
  20. cvassigns: ndarray
  21. array of shape (nsamples,) with each element in [0,k),that can be
  22. used to assign data to a fold. This corresponds to the indices of
  23. cvinds.
  24.  
  25. """
  26. np.random.seed(seed)
  27. pindeces = np.random.permutation(nsamples)
  28. cvinds = np.array_split(pindeces, k)
  29.  
  30. cvassigns = np.zeros(nsamples, dtype=int)
  31. for n, inds in enumerate(cvinds):
  32. cvassigns[inds] = n
  33.  
  34. return cvinds, cvassigns
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def fit(self, y, *args, **kwargs):
  2.  
  3. # set a different random seed for each thread
  4. np.random.seed(self.random_state + mpiops.chunk_index)
  5.  
  6. if self.parallel:
  7. process_rfs = np.array_split(range(self.forests),
  8. mpiops.chunks)[mpiops.chunk_index]
  9. else:
  10. process_rfs = range(self.forests)
  11.  
  12. for t in process_rfs:
  13. print(''training forest {} using ''
  14. ''process {}''.format(t, mpiops.chunk_index))
  15.  
  16. # change random state in each forest
  17. self.kwargs[''random_state''] = np.random.randint(0, 10000)
  18. rf = RandomForestTransformed(
  19. target_transform=self.target_transform,
  20. n_estimators=self.n_estimators,
  21. **self.kwargs
  22. )
  23. rf.fit(x, y)
  24. if self.parallel: # used in training
  25. pk_f = join(self.temp_dir,
  26. ''rf_model_{}.pk''.format(t))
  27. else: # used when parallel is false,i.e.,during x-val
  28. pk_f = join(self.temp_dir,
  29. ''rf_model_{}_{}.pk''.format(t, mpiops.chunk_index))
  30. with open(pk_f, ''wb'') as fp:
  31. pickle.dump(rf, fp)
  32. if self.parallel:
  33. mpiops.comm.barrier()
  34. # Mark that we are Now trained
  35. self._trained = True
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def kmean_distance2(x, C):
  2. """Compute squared euclidian distance to the nearest cluster centre
  3.  
  4. Parameters
  5. ----------
  6. x : ndarray
  7. (n,d) array of n d-dimensional points
  8. C : ndarray
  9. (k,d) array of k cluster centres
  10.  
  11. Returns
  12. -------
  13. d2_x : ndarray
  14. (n,) length array of distances from each x to the nearest centre
  15. """
  16. # To save memory we partition the computation
  17. nsplits = max(1, int(x.shape[0]/distance_partition_size))
  18. splits = np.array_split(x, nsplits)
  19. d2_x = np.empty(x.shape[0])
  20. idx = 0
  21. for x_i in splits:
  22. n_i = x_i.shape[0]
  23. D2_x = scipy.spatial.distance.cdist(x_i, metric=''sqeuclidean'')
  24. d2_x[idx:idx + n_i] = np.amin(D2_x, axis=1)
  25. idx += n_i
  26. return d2_x
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def compute_weights(x, C):
  2. """Number of points in x assigned to each centre c in C
  3.  
  4. Parameters
  5. ----------
  6. x : ndarray
  7. (n,d) array of k cluster centres
  8.  
  9. Returns
  10. -------
  11. weights : ndarray
  12. (k,) length array giving number of x closest to each c in C
  13. """
  14. nsplits = max(1, nsplits)
  15. closests = np.empty(x.shape[0], dtype=int)
  16. idx = 0
  17. for x_i in splits:
  18. n_i = x_i.shape[0]
  19. D2_x = scipy.spatial.distance.cdist(x_i, metric=''sqeuclidean'')
  20. closests[idx: idx+n_i] = np.argmin(D2_x, axis=1)
  21. idx += n_i
  22. weights = np.bincount(closests, minlength=C.shape[0])
  23. return weights
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def reseed_point(X, index):
  2. """ Re-initialise the centre of a class if it loses all its members
  3.  
  4. This should almost never happen. If it does,find the point furthest
  5. from all the other cluster centres and use that. Maybe a bad idea but
  6. a decent first pass
  7.  
  8. Parameters
  9. ----------
  10. X : ndarray
  11. (n,d) array of points
  12. C : ndarray
  13. (k,d) array of cluster centres
  14. index : int >= 0
  15. index between 0..k-1 of the cluster that has lost it''s points
  16.  
  17. Returns
  18. -------
  19. new_point : ndarray
  20. d-dimensional point for replacing the empty cluster centre.
  21. """
  22. log.info("Reseeding class with no members")
  23. nsplits = max(1, int(X.shape[0]/distance_partition_size))
  24. splits = np.array_split(X, nsplits)
  25. empty_index = np.ones(C.shape[0], dtype=bool)
  26. empty_index[index] = False
  27. local_candidate = None
  28. local_cost = 1e23
  29. for x_i in splits:
  30. D2_x = scipy.spatial.distance.cdist(x_i, metric=''sqeuclidean'')
  31. costs = np.sum(D2_x[:, empty_index], axis=1)
  32. potential_idx = np.argmax(costs)
  33. potential_cost = costs[potential_idx]
  34. if potential_cost < local_cost:
  35. local_candidate = x_i[potential_idx]
  36. local_cost = potential_cost
  37. best_pernode = mpiops.comm.allgather(local_cost)
  38. best_node = np.argmax(best_pernode)
  39. new_point = mpiops.comm.bcast(local_candidate, root=best_node)
  40. return new_point
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def __init__(self, shape, bBox, crs, name, n_subchunks, outputdir,
  2. band_tags=None):
  3. # affine
  4. self.A, _, _ = image.bBox2affine(bBox[1, 0], bBox[0,
  5. bBox[0, 1], bBox[1,
  6. shape[0], shape[1])
  7. self.shape = shape
  8. self.outbands = len(band_tags)
  9. self.bBox = bBox
  10. self.name = name
  11. self.outputdir = outputdir
  12. self.n_subchunks = n_subchunks
  13. self.sub_starts = [k[0] for k in np.array_split(
  14. np.arange(self.shape[1]),
  15. mpiops.chunks * self.n_subchunks)]
  16.  
  17. # file tags don''t have spaces
  18. if band_tags:
  19. file_tags = ["_".join(k.lower().split()) for k in band_tags]
  20. else:
  21. file_tags = [str(k) for k in range(self.outbands)]
  22. band_tags = file_tags
  23.  
  24. if mpiops.chunk_index == 0:
  25. # create a file for each band
  26. self.files = []
  27. for band in range(self.outbands):
  28. output_filename = os.path.join(outputdir, name + "_" +
  29. file_tags[band] + ".tif")
  30. f = Rasterio.open(output_filename, ''w'', driver=''GTiff'',
  31. width=self.shape[0], height=self.shape[1],
  32. dtype=np.float32, count=1,
  33. crs=crs,
  34. transform=self.A,
  35. nodata=self.nodata_value)
  36. f.update_tags(1, image_type=band_tags[band])
  37. self.files.append(f)
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def gdalaverage(input_dir, out_dir, size):
  2. """
  3. average data using gdal''s averaging method.
  4. Parameters
  5. ----------
  6. input_dir: str
  7. input dir name of the tifs that needs to be averaged
  8. out_dir: str
  9. output dir name
  10. size: int,optional
  11. size of kernel
  12. Returns
  13. -------
  14.  
  15. """
  16. input_dir = abspath(input_dir)
  17. log.info(''Reading tifs from {}''.format(input_dir))
  18. tifs = glob.glob(join(input_dir, ''*.tif''))
  19.  
  20. process_tifs = np.array_split(tifs, mpiops.chunks)[mpiops.chunk_index]
  21.  
  22. for tif in process_tifs:
  23. data_set = gdal.Open(tif, gdal.GA_ReadOnly)
  24. # band = data_set.GetRasterBand(1)
  25. # data_type = gdal.GetDataTypeName(band.DataType)
  26. # data = band.ReadAsArray()
  27. # no_data_val = band.GetNoDataValue()
  28. # averaged_data = filter_data(data,size,no_data_val)
  29. log.info(''Calculated average for {}''.format(basename(tif)))
  30.  
  31. output_file = join(out_dir, ''average_'' + basename(tif))
  32. src_gt = data_set.GetGeoTransform()
  33. tmp_file = ''/tmp/tmp_{}.tif''.format(mpiops.chunk_index)
  34. resample_cmd = [TRANSLATE] + [tif, tmp_file] + \\
  35. [''-tr'', str(src_gt[1]*size), str(src_gt[1]*size)] + \\
  36. [''-r'', ''bilinear'']
  37. check_call(resample_cmd)
  38. rollback_cmd = [TRANSLATE] + [tmp_file, output_file] + \\
  39. [''-tr'', str(src_gt[1]), str(src_gt[1])]
  40. check_call(rollback_cmd)
  41. log.info(''Finished converting {}''.format(basename(tif)))
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def mean(input_dir, size, func, partitions, mask):
  2. input_dir = abspath(input_dir)
  3. if isdir(input_dir):
  4. log.info(''Reading tifs from {}''.format(input_dir))
  5. tifs = glob.glob(join(input_dir, ''*.tif''))
  6. else:
  7. assert isfile(input_dir)
  8. tifs = [input_dir]
  9.  
  10. process_tifs = np.array_split(tifs, mpiops.chunks)[mpiops.chunk_index]
  11.  
  12. for tif in process_tifs:
  13. log.info(''Starting to average {}''.format(basename(tif)))
  14. treat_file(tif, mask)
  15. log.info(''Finished averaging {}''.format(basename(tif)))
Project: uncover-ml    Author: GeoscienceAustralia    | project source | file source
  1. def inspect(input_dir, report_file, extension):
  2. input_dir = abspath(input_dir)
  3. if isdir(input_dir):
  4. log.info(''Reading tifs from {}''.format(input_dir))
  5. tifs = glob.glob(join(input_dir, ''*.'' + extension))
  6. else:
  7. log.info(''Reporting geoinfo for {}''.format(input_dir))
  8. tifs = [input_dir]
  9.  
  10. with open(report_file, newline='''') as csvfile:
  11. writer = csv.writer(csvfile, dialect=''excel'')
  12. writer.writerow([''FineName'', ''band'', ''NoDataValue'', ''rows'', ''cols'',
  13. ''Min'', ''Max'', ''Mean'', ''Std'',
  14. ''DataType'', ''Categories'', ''NanCount''])
  15. process_tifs = np.array_split(tifs, mpiops.chunks)[mpiops.chunk_index]
  16.  
  17. stats = [] # process geotiff stats including multibanded geotif
  18. for t in process_tifs:
  19. stats.append(get_stats(t, partitions))
  20.  
  21. # gather all process geotif stats in stats dict
  22. stats = _join_dicts(stats)
  23.  
  24. # global gather in root
  25. stats = _join_dicts(mpiops.comm.gather(stats, root=0))
  26.  
  27. if mpiops.chunk_index == 0:
  28. for k, v in stats.items():
  29. write_rows(v, writer)

Are the chunks returned by 'np.array_split()' sorted in decreasing order of size?

When numpy.array_split is called with an integer and the number of sections is not a divisor of the size along the axis in question, some sections end up smaller or larger than others, for example:

import numpy as np
[chunk.shape[0] for chunk in np.array_split(np.arange(12),5)]

returns the chunk sizes [3, 3, 2, 2, 2].

Although the documentation does not mention it, the smallest chunks appear to come at the end of the list. Sampling confirms this for arrays of up to 200 elements, whatever the requested number of chunks:

import numpy as np    
not_ordered = 0
for sample_size in np.arange(2,200):
    a = np.arange(sample_size)
    for n in np.arange(2,sample_size//2):
        chunks = np.array_split(a,n)
        sizes = [chunk.shape[0] for chunk in chunks]
        for i in np.arange(1,len(sizes)):
            if sizes[i] > sizes[i-1]:
                not_ordered += 1
                break
print(f'Not ordered: {not_ordered}')

Does the algorithm behind the function guarantee this descending order, or is it something that should not be relied upon when using the returned result?

Solution

The numpy.array_split documentation says:

For an array of length l that should be split into n sections, it returns l % n sub-arrays of size l//n + 1 and the rest of size l//n.

Since l and n are fixed within a single call, we can conclude that, for every element of the returned list, the next element will never be larger than the current one.

Edit: if in doubt, since this is Python, we can read the code. If indices_or_sections is an integer, the relevant part is:

Nsections = int(indices_or_sections)
if Nsections <= 0:
    raise ValueError('number sections must be larger than 0.')
Neach_section, extras = divmod(Ntotal, Nsections)
section_sizes = ([0] +
                 extras * [Neach_section+1] +
                 (Nsections-extras) * [Neach_section])
div_points = _nx.array(section_sizes, dtype=_nx.intp).cumsum()

where Ntotal is the number of elements in the input array. As you can see, the Neach_section+1 sizes come before the Neach_section sizes.
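As a quick check (my own snippet, with Ntotal and Nsections chosen arbitrarily), the formula reproduces exactly what array_split returns:

import numpy as np

Ntotal, Nsections = 12, 5
Neach_section, extras = divmod(Ntotal, Nsections)   # 2, 2
section_sizes = extras * [Neach_section + 1] + (Nsections - extras) * [Neach_section]
print(section_sizes)   # [3, 3, 2, 2, 2]
print([c.shape[0] for c in np.array_split(np.arange(Ntotal), Nsections)])   # [3, 3, 2, 2, 2]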

Numpy in Jupyter errors when printing (Python version 3.8.8): TypeError: 'numpy.ndarray' object is not callable

Good evening. I ran into a problem with numpy in Jupyter when trying to print the following, and I get an error. Note that the Python version is 3.8.8. I tested it first in Spyder, where it runs correctly and gives the expected results.

Using Spyder:

import numpy as np
for i in range (5):
    n = np.random.rand ()
    print (n)

Results:
0.6604903457995978
0.8236300859753154
0.16067650689842816
0.6967868357083673
0.4231597934445466

Now in Jupyter:

import numpy as np
for i in range (5):
    n = np.random.rand ()
    print (n)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-78-0c6a801b3ea9> in <module>
      2 for i in range (5):
      3     n = np.random.rand ()
----> 4     print (n)

TypeError: 'numpy.ndarray' object is not callable

Thanks for any help on how I can fix this in Jupyter.

Thank you very much for your time.

Regards, John

Solution

No effective solution to this problem has been found yet.
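For what it is worth, one common cause of exactly this traceback (an assumption here, since the original notebook is not available) is that an earlier cell rebound the name print to an ndarray, so the later print(n) line tries to call an array:

import numpy as np

print = np.random.rand(3)   # hypothetical earlier cell that shadows the built-in print
n = np.random.rand()
print(n)                    # TypeError: 'numpy.ndarray' object is not callable

If that is the case, restarting the kernel, or running del print to remove the shadowing name, restores the built-in function.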


np.split() and np.array_split()

From: 爱抠脚的coder

np.split():

The split argument of this function is either an integer (int) or a list: if you pass only an integer, the array must divide evenly into that many parts, otherwise an error is raised.

np.array_split():

array_split() can perform unequal splits.

With a list of indices, the array is split at positions 3, 5, 6 and 10.
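A minimal sketch of the list form, using the split points mentioned above (the input array itself is my own choice):

import numpy as np

x = np.arange(12)
parts = np.split(x, [3, 5, 6, 10])
print([p.tolist() for p in parts])
# [[0, 1, 2], [3, 4], [5], [6, 7, 8, 9], [10, 11]]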

 

As soon as the division is not even, np.split raises an error:

x = np.arange(8)
y = np.split(x, 3)

print(y)

The error is:

ValueError: array split does not result in an equal division
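For comparison, np.array_split accepts the same arguments without raising (a small sketch):

import numpy as np

x = np.arange(8)
y = np.array_split(x, 3)
print([c.tolist() for c in y])   # [[0, 1, 2], [3, 4, 5], [6, 7]]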

 

Unequal splitting:

For an array of length l split into n sections, it returns l % n sub-arrays of size (l // n) + 1 and the remaining sub-arrays of size (l // n).

25 mod 7 is 4, so it returns 4 sub-arrays of size (25//7)+1 and 3 sub-arrays of size (25//7).
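Checking that concretely (my own snippet):

import numpy as np

sizes = [c.shape[0] for c in np.array_split(np.arange(25), 7)]
print(sizes)   # [4, 4, 4, 4, 3, 3, 3]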

 

numpy.random.random & numpy.ndarray.astype & numpy.arange

Today I came across these lines of code:

xb = np.random.random((nb, d)).astype('float32')  # create a 2-D random matrix (nb rows, d columns)
xb[:, 0] += np.arange(nb) / 1000.  # add an offset to each value in the first column

Understanding these two lines requires understanding three functions.

1. Generating random numbers

numpy.random.random(size=None) 

When size is None, it returns a float.

When size is not None, it returns a numpy.ndarray. For example, numpy.random.random((1,2)) returns a numpy array with 1 row and 2 columns.

 

2. Casting the type of every element in a numpy array

numpy.ndarray.astype(dtype)

Returns a numpy.ndarray. For example, numpy.array([1, 2, 2.5]).astype(int) returns the numpy array [1, 2, 2].

 

3. Generating an arithmetic sequence

numpy.arange([start, ]stop, [step, ]dtype=None)

Its behaviour is similar to Python's built-in range() and to numpy.linspace.

Returns a numpy array. For example, numpy.arange(3) returns the numpy array [0, 1, 2].
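Putting the three functions together on the two lines quoted at the start of this section (nb and d are placeholder values of my own choosing):

import numpy as np

nb, d = 5, 4
xb = np.random.random((nb, d)).astype('float32')   # nb x d random matrix, cast to float32
xb[:, 0] += np.arange(nb) / 1000.                   # add i/1000 to row i's first column
print(xb.dtype, xb.shape)                           # float32 (5, 4)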

This concludes the introduction to the Python numpy module's array_split() example source code and the numpy module in Python. Thank you for your patient reading. If you would like to learn more about the related topics (whether the chunks returned by 'np.array_split()' are sorted in decreasing order of size; the Jupyter printing error with numpy on Python 3.8.8, TypeError: 'numpy.ndarray' object is not callable; np.split() and np.array_split(); and numpy.random.random & numpy.ndarray.astype & numpy.arange), please search this site.
