
# When is the sum of the series 1 + 1/2 + 1/3 + 1/4 + ... + 1/n equal to log n, and when is it equal to n?

By : Achim
Date : September 12 2020, 02:00 AM
For the former, the inner loop runs approximately (exactly when n is a power of 2)
code :
``````n + n/2 + n/4 + n/8 + ... + n/2^(log2 n)
``````
times in total. Factoring out n gives
``````n * (1 + 1/2 + 1/4 + 1/8 + ... + (1/2)^(log2 n))
``````
The geometric series in parentheses sums to less than 2, so the total work is about 2n, i.e. O(n). For the latter, the inner loop runs
``````ceil(n) + ceil(n / 2) + ceil(n/3) + ... + ceil(n/n)
``````
times, which is approximately
``````n * (1 + 1/2 + 1/3 + 1/4 + ... + 1/n)
``````
The harmonic series in parentheses grows like ln n, so the total work is O(n log n).
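A quick numeric check of the two totals (a sketch; `n` is chosen arbitrarily as a power of two so the geometric sum is exact):

```python
import math

def geometric_steps(n):
    """Total work of a loop whose bound halves each time: n + n/2 + n/4 + ..."""
    total, k = 0, n
    while k >= 1:
        total += k
        k //= 2
    return total

def harmonic_steps(n):
    """Total work of ceil(n/1) + ceil(n/2) + ... + ceil(n/n)."""
    return sum(math.ceil(n / i) for i in range(1, n + 1))

n = 1024
print(geometric_steps(n))        # 2047, i.e. just under 2n
print(harmonic_steps(n) / n)     # close to ln(1024) ~ 6.93 (plus a small constant)
```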

Share :

## Checking if two Objects containing Series are equal

By : SuperG
Date : March 29 2020, 07:55 AM
Don't you just want to check whether the two Series are equal? Assuming you're talking about pandas.Series, use the Series.equals() method:
code :
``````def __eq__(self, other):
    '''
    Two objects compare equal when their amounts Series match.
    '''
    if self.amounts.equals(other.amounts):
        print('Series are equal')
        return True
    else:
        print('Series are not equal')
        return False
``````
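One reason to prefer Series.equals() over an element-wise comparison is NaN handling. A minimal sketch:

```python
import pandas as pd
import numpy as np

a = pd.Series([1.0, np.nan, 3.0])
b = pd.Series([1.0, np.nan, 3.0])

# Element-wise == follows float semantics, where NaN != NaN,
# so the all() check fails even though the Series look identical.
print((a == b).all())   # False
# Series.equals treats NaNs in the same positions as equal.
print(a.equals(b))      # True
```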

## Why can't I set a series type to equal another series type with Python pandas

By : wrightmac
Date : March 29 2020, 07:55 AM
When you assign a Series to a DataFrame column, pandas matches the new values to the rows by index. Your original DataFrame presumably has a meaningful index, but your new Series just has the default index 0, 1, 2, 3... because those are the keys in your dictionary. Here is a simple example:
code :
``````>>> d = pandas.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=[10, 11, 12])
>>> d
A  B
10  1  4
11  2  5
12  3  6
>>> d["C"] = pandas.Series([8, 88, 888])
>>> d
A  B   C
10  1  4 NaN
11  2  5 NaN
12  3  6 NaN
>>> d["C"] = pandas.Series([8, 88, 888], index=[10, 11, 12])
>>> d
A  B    C
10  1  4    8
11  2  5   88
12  3  6  888
``````
``````results['CreationDate'] = results['CreationDate'].map(pandas.to_datetime)
``````
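If positional assignment is what you actually want, a common workaround (a sketch; it assumes the lengths match) is to strip the Series' index before assigning, so pandas cannot align on it:

```python
import pandas as pd

d = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=[10, 11, 12])

# Assign by position: .to_numpy() discards the Series' own index.
d["C"] = pd.Series([8, 88, 888]).to_numpy()
print(d["C"].tolist())   # [8, 88, 888]
```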

## Add value from series index to row of equal value in Pandas DataFrame

By : keky
Date : March 29 2020, 07:55 AM
I managed to find an answer to my question that is quite a bit nicer than my original approach as well:
code :
``````df = df.groupby('TripID').filter(lambda x: len(x) > 2)
``````
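A minimal runnable sketch of that filter (the `TripID` column name comes from the answer; the data are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "TripID": [1, 1, 1, 2, 2, 3],
    "value":  [10, 20, 30, 40, 50, 60],
})

# Keep only the trips that have more than two rows.
df = df.groupby("TripID").filter(lambda x: len(x) > 2)
print(df["TripID"].tolist())   # [1, 1, 1]
```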

## get count of entries less or equal in Series

By : Dom Manuel
Date : March 29 2020, 07:55 AM
A faster option is a numpy solution: convert the Series to a numpy array, compare it against itself by broadcasting to a 2d boolean array, and count the True values in each row with sum:
code :
``````b = a.values
#pandas 0.24+
#b = a.to_numpy()
le = pd.Series((b <= b[:, None]).sum(axis=1), index=a.index)
``````
``````print (b <= b[:, None])
[[ True False  True False  True  True  True False]
[ True  True  True  True  True  True  True  True]
[False False  True False  True  True  True False]
[ True False  True  True  True  True  True False]
[False False False False  True  True  True False]
[False False False False False  True  True False]
[False False False False False  True  True False]
[ True False  True  True  True  True  True  True]]
``````
``````le = pd.Series([a.le(i).sum() for i in a])
``````
``````le = a.apply(lambda i: a.le(i).sum())
``````
``````print(le)
0    5
1    8
2    4
3    6
4    3
5    2
6    2
7    7
dtype: int64
``````
``````np.random.seed(2019)
N = 10**6
s = pd.Series(np.random.randint(100, size=N))
#print (s)
``````
``````In [173]: %%timeit
...: b = a.values
...: le = pd.Series((b <= b[:, None]).sum(axis=1), index=a.index)
...:
78.6 µs ± 510 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [174]: %%timeit
...: le = pd.Series([a.le(i).sum() for i in a])
...:
3.22 ms ± 136 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [175]: %%timeit
...: le = a.apply(lambda i: a.le(i).sum())
...:
3.35 ms ± 290 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [176]: %%timeit
...: a.apply(lambda x: a[a.le(x)].count())
...:
...:
5.41 ms ± 457 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [177]: %%timeit
...: le = pd.Series(data=[a[a <= i].count() for i in a])
...:
4.91 ms ± 281 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
``````
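As an aside (not part of the original answer), pandas' rank() with method='max' produces the same counts without materializing the O(n²) broadcast matrix, because the maximal rank of a value equals the number of elements less than or equal to it. The input series below is reconstructed to reproduce the printed counts above:

```python
import pandas as pd

a = pd.Series([3, 9, 2, 4, 1, 0, 0, 8])

# max rank of each value == count of elements <= that value
le = a.rank(method='max').astype(int)
print(le.tolist())   # [5, 8, 4, 6, 3, 2, 2, 7]
```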

## Remove series in array if ALL values equal 0

By : user2994339
Date : March 29 2020, 07:55 AM
One way to do this is to gather all the ifDesc values first. Once you have them, process the actual array and remove each batch of rows whose Octets sum to zero.
code :
``````$ifDescs = array_unique(array_column($rows, 'ifDesc')); // collect all distinct ifDescs
foreach ($ifDescs as $currentIfDesc) {      // loop over each ifDesc batch
    $current_batch = [];                    // temporary container for this batch
    foreach ($rows as $k => $row) {         // group the rows by ifDesc first
        if ($row['ifDesc'] === $currentIfDesc) {
            $current_batch[$k] = $row;
        }
    }
    // sum the Octets of the current batch and check whether the total is greater than zero
    $non_zero_octets = array_sum(array_column($current_batch, 'Octets')) > 0;
    if (!$non_zero_octets) {                            // if the batch sums to zero
        $rows = array_diff_key($rows, $current_batch);  // remove its rows by key
    }
}
``````