Python least squares with scipy.integrate.quad
By : subhajit ghosh
Date : March 29 2020, 07:55 AM
With funeval(90., 0.001, 0.0002), Temp is a scalar; however, when you call scipy.optimize.leastsq you pass the entire T array into funeval, which makes scipy.integrate.quad fail. A quick fix is to loop over the array and integrate one temperature at a time, something like: code :
def funeval(Temp, eps, sig):
    out = []
    for T in Temp:  # integrate separately for each temperature
        val = scipy.integrate.quad(
            lambda x: np.expm1(((4. * eps) / T) * ((sig / x)**12. - (sig / x)**6.) * (x**2.)),
            0.0, np.inf)[0]
        out.append(val)
    return np.array(out)

def residuals(p, y, Temp):
    eps, sig = p
    err = y - funeval(Temp, eps, sig)
    return err

print(funeval([90], 0.001, 0.0002))
plsq = scipy.optimize.leastsq(residuals, [0.00001, 0.0002], args=(B, T))  # B, T: data arrays from the question
(array([ 3.52991175e-06, 9.04143361e-02]), 1)

Python scipy.signal.remez high pass filter design yields strange transfer function
By : Camilo Andrés Rosero
Date : March 29 2020, 07:55 AM
For a highpass filter with the default remez argument type='bandpass', use an odd number of taps. With an even number of taps, remez creates a Type II filter, which has a forced zero at the Nyquist frequency, so the algorithm has a hard time meeting a highpass specification under that constraint. Here's a plot of the gain when L = 41:
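As a minimal sketch of an odd-length highpass design (the band edges and tap count here are illustrative, not from the question):

```python
import numpy as np
from scipy import signal

fs = 2.0  # normalized sampling rate, so the Nyquist frequency is 1.0
L = 41    # odd number of taps -> Type I filter, no forced zero at Nyquist

# Highpass via the default 'bandpass' response type:
# stopband [0, 0.2], transition (0.2, 0.3), passband [0.3, 1.0].
taps = signal.remez(L, [0.0, 0.2, 0.3, 1.0], [0.0, 1.0], fs=fs)

# Inspect the frequency response: gain should be ~0 at DC and ~1 near Nyquist.
w, h = signal.freqz(taps, worN=2048)
print(abs(h[0]), abs(h[-1]))
```

With an even L the same call produces a response that collapses toward zero at the top of the passband, which is the "strange transfer function" from the question.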

Weighted random sample without replacement in python
By : Rass Tarro
Date : March 29 2020, 07:55 AM
I need to obtain a k-sized sample without replacement from a population, where each member of the population has an associated weight (W). You can use np.random.choice with replace=False as follows: code :
np.random.choice(vec, size, replace=False, p=P)

import numpy as np
vec = [1, 2, 3]
P = [0.5, 0.2, 0.3]
np.random.choice(vec, size=2, replace=False, p=P)

Scipy.curve_fit() vs. Matlab fit() weighted nonlinear least squares
By : user3423484
Date : March 29 2020, 07:55 AM
Ok, so after further investigation I can offer the answer, at least for this simple example. code :
import numpy as np
import scipy as sp
import scipy.optimize

def modelFun(x, m, b):
    return m * x + b

def testFit():
    # Weight matrix built from the measurement standard deviations.
    w = np.diag([1.0, 1/0.7290, 1/0.5120, 1/0.3430, 1/0.2160, 1/0.1250, 1/0.0640, 1/0.0270, 1/0.0080, 1/0.0010])
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([0.1075, 1.3668, 1.5482, 3.1724, 4.0638, 4.7385, 5.9133, 7.0685, 8.7157, 9.5539])
    popt = sp.optimize.curve_fit(modelFun, x, y, sigma=w)
    print(popt[0])  # fitted parameters (m, b)
    print(popt[1])  # estimated covariance of the parameters

Unexpected standard errors with weighted least squares in Python Pandas
By : Joshua Smith
Date : March 29 2020, 07:55 AM
Not directly answering your question here, but, in general, you should prefer the statsmodels code over pandas for modeling. There were some recently discovered problems with WLS in statsmodels that are now fixed. AFAIK, they were also fixed in pandas, but for the most part the pandas modeling code is not maintained, and the medium-term goal is to make sure everything available in pandas is deprecated and has been moved to statsmodels (the next statsmodels release, 0.6.0, should do it). To be a little clearer: pandas is now a dependency of statsmodels. You can pass DataFrames to statsmodels or use formulas in statsmodels. This is the intended relationship going forward.

