
What's the fastest way to approximate the period of data using Octave?

By : rik84
Date : November 21 2020, 07:01 PM
Take a look at the autocorrelation function.
From Wikipedia
code :

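The original Octave snippet did not survive the scrape. As a rough illustration of the autocorrelation approach (a sketch in Python/NumPy rather than Octave; `estimate_period` is a name chosen here, not from the original answer), the period can be estimated as the lag of the first autocorrelation peak after lag 0:

```python
import numpy as np

def estimate_period(x):
    """Estimate the period of a signal via its autocorrelation.

    The period is taken as the lag of the first local maximum of the
    autocorrelation after lag 0.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Full autocorrelation; keep non-negative lags only.
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    # Find the first local maximum after lag 0.
    for lag in range(1, len(acf) - 1):
        if acf[lag] > acf[lag - 1] and acf[lag] >= acf[lag + 1]:
            return lag
    return None

# Example: a sine wave with a period of 50 samples.
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 50)
print(estimate_period(signal))  # -> 50
```

For noisy data you would typically smooth the autocorrelation first, or restrict the peak search to a plausible lag range.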

Is Approximate Nearest Neighbour the fastest feature matching in Computer Vision?

By : M Tawfik
Date : March 29 2020, 07:55 AM
I'd say that Euclidean-distance-based nearest neighbor would be the easiest to implement, but not necessarily the fastest.
I'd agree that approximate nearest neighbor or 'best bin first' would be the quickest at identifying which image in your background set most closely resembles the probe image.
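As a minimal sketch of the brute-force Euclidean baseline mentioned above (Python/NumPy chosen for illustration; the function name and toy descriptors are made up, not from the original answer):

```python
import numpy as np

def nearest_neighbor(probe, gallery):
    """Brute-force nearest neighbor: return the index of the gallery
    descriptor closest to the probe in Euclidean distance."""
    gallery = np.asarray(gallery, dtype=float)
    probe = np.asarray(probe, dtype=float)
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))

# Example: three 2-D descriptors; the probe is closest to the second.
gallery = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(nearest_neighbor([0.9, 1.2], gallery))  # -> 1
```

This exact search is O(N) per query; approximate schemes such as best-bin-first trade a small chance of a wrong match for much lower query cost on large gallery sets.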

Fastest approximate counting algorithm

By : Rohit Sharma
Date : March 29 2020, 07:55 AM
@Ben Allison's answer is a good way if you want to count the total lines. Since you mentioned Bayes and the prior, I will add an answer in that direction to estimate the percentage of the different groups. (See my comments on your question: if you have an idea of the total and you want to do a group-by, estimating the percentage of each group makes more sense.)
The recursive Bayesian update:
code :
``````if group == 'group1':
    alpha = alpha + 1
else:
    beta = beta + 1
``````
``````                 s^(m+alpha-1) (1-s)^(n-m+beta-1)
p(s | M(m,n)) = ----------------------------------- = Beta(m+alpha, n-m+beta)
                        B(m+alpha, n-m+beta)
``````
``````mean = alpha/(alpha+beta)
var = alpha*beta/((alpha+beta)**2 * (alpha+beta+1))
``````
``````import sys

# Recursive Bayesian update of a Beta(alpha, beta) posterior
# over the proportion of 'group1' lines.
alpha = 1.
beta = 1.

for line in sys.stdin:
    data = line.strip()
    if data == 'group1':
        alpha += 1.
    elif data == 'group2':
        beta += 1.
    else:
        continue

    mean = alpha/(alpha+beta)
    var = alpha*beta/((alpha+beta)**2 * (alpha+beta+1))
    print('mean = %.3f, var = %.3f' % (mean, var))
``````
``````group1
group1
group1
group1
group2
group2
group2
group1
group1
group1
group2
group1
group1
group1
group2
``````
``````mean = 0.667, var = 0.056
mean = 0.750, var = 0.037
mean = 0.800, var = 0.027
mean = 0.833, var = 0.020
mean = 0.714, var = 0.026
mean = 0.625, var = 0.026
mean = 0.556, var = 0.025
mean = 0.600, var = 0.022
mean = 0.636, var = 0.019
mean = 0.667, var = 0.017
mean = 0.615, var = 0.017
mean = 0.643, var = 0.015
mean = 0.667, var = 0.014
mean = 0.688, var = 0.013
mean = 0.647, var = 0.013
``````
``````head -n100000 YOURDATA.txt | python groupby.py
``````

What's the highest (approximate) request rate DRb can handle?

By : Simon Leen
Date : March 29 2020, 07:55 AM
I'm using DRb for relatively infrequent interprocess communication now, but I'm worried that it may not be able to handle the load if my service grows, especially given things like spawning a new thread to deal with every request. Does anybody have experience with DRb's upper limits and can tell me at approximately what load it starts causing problems? What would be a better way of dealing with this, perhaps a thread running Sinatra? , Run a performance test on it, and test for yourself.
code :
``````require 'benchmark'

Benchmark.bm do |x|
  x.report { 100000.times { "Do DRb request here" } }
end
``````

FFT in Octave/Matlab, Plot cos(x) and approximate with

By : AnshuSophie
Date : March 29 2020, 07:55 AM
I am also a bit confused about what the question actually is; I guess you want to plot that formula.
Have a row vector k = 0:(n/2 - 1) (suppose n even).
code :
``````leftterm = sum(d .* exp(i * x * k), 2)
``````
``````rightterm = sum( fliplr(d) .* exp(- i * x * (k.+1)), 2)
``````
``````f = sum(d .* exp(i * x * k) + fliplr(d) .* exp(- i * x * (k .+ 1)), 2)
``````
``````plot(x,f)
``````

What's the fastest way to insert streaming data into a table? (DB is MS SQL Server 2008)

By : AUG
Date : March 29 2020, 07:55 AM
Look at SqlBulkCopy; it allows fast inserts of multiple rows of data. You could buffer a few thousand rows and insert them periodically.
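SqlBulkCopy itself is a .NET class, but the buffering pattern is language-neutral. Here is a minimal sketch in Python of the "buffer a few thousand rows, then insert them in one batch" idea; `BufferedInserter` and `flush_fn` are hypothetical names standing in for the real bulk-insert call (e.g. SqlBulkCopy.WriteToServer), not part of any library:

```python
class BufferedInserter:
    """Accumulate rows and flush them in batches.

    `flush_fn` stands in for the real bulk insert; here it just
    receives the full batch as a list of rows.
    """

    def __init__(self, flush_fn, batch_size=5000):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer = []

    def add(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []

# Example: collect flushed batches instead of hitting a database.
batches = []
inserter = BufferedInserter(batches.append, batch_size=3)
for i in range(7):
    inserter.add((i, "row-%d" % i))
inserter.flush()  # flush the remainder
print([len(b) for b in batches])  # -> [3, 3, 1]
```

In a real streaming setup you would also flush on a timer, so that a slow trickle of rows does not sit in the buffer indefinitely.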