Offline Davide Carpi  
#101 Posted : 12 October 2012 02:24:32(UTC)
Davide Carpi


Rank: Advanced Member

Groups: Registered, Advanced Member
Joined: 13/01/2012(UTC)
Posts: 2,647
Man
Italy
Location: Italy

Was thanked: 1329 time(s) in 875 post(s)
Originally Posted by: mkraska
Hi,

I did some simple testing, here are the results.
The screenshot is generated with standard settings for dec and arg delims.

Broyden and HRE.B generate errors with non-standard settings (dec "," and arg ";"). All other errors seem not to be related to the custom settings.

Quite interesting that solve(4) outperforms them all in the given example.

Most solvers that take intervals as arguments allow just one root in that interval.

Best regards, Martin


Thank you again, Martin!

See the attachment; the issues are fixed with the plugin in the following post.


regards,

w3b5urf3r
Davide Carpi attached the following image(s):
PrtScr capture.png
If you like my plugins, consider supporting SMath Studio by buying a plan; to offer me a coffee: paypal.me/dcprojects
Offline Davide Carpi  
#102 Posted : 12 October 2012 02:28:36(UTC)
FOR TESTING PURPOSES ONLY (BETA)

Hi all,

in the attached plugin:
- new functions NCGM(...) and NCGM.CD(...) - Nonlinear Conjugate Gradient Method optimization algorithms;
- new function Gradient.CD(...) - central-differences-based Gradient;
- new function Jacobian.CD(...) - central-differences-based Jacobian;
- new function Hessian.CD(...) - central-differences-based Hessian;
- new functions GaussNewton.CD(...) and GaussNewton.CDGSS(...) - central-differences-based GaussNewton;
- new function LevenbergMarquardt.CD(...) - central-differences-based LevenbergMarquardt;
- new function NewtonRaphson.CD(...) - central-differences-based NewtonRaphson;
- new functions NewtonMethod.CD(...) and NewtonMethod.CDGSS(...) - central-differences-based NewtonMethod;
- improved performance of the Gradient/Jacobian/Hessian-based functions;
- fixed custom-settings issues in Broyden(...) and HRE.B(...);
- minor changes.

Carried over from the previous BETA:
- new function GaussNewton(...) - Gauss-Newton optimization algorithm (since previous BETA: improved performance);
- new functions GoldenSectionSearch.min(...) and GoldenSectionSearch.max(...) - Golden Section Search minimization/maximization algorithms (since previous BETA: no changes);
- new functions GradientDescent(...) and GradientDescent.GSS(...) - Gradient Descent optimization algorithm (with fixed step length and GoldenSectionSearch-based step length, respectively) (since previous BETA: improved performance);
- new function LevenbergMarquardt(...) - Levenberg-Marquardt optimization algorithm (since previous BETA: improved performance);
- new functions NewtonMethod(...) and NewtonMethod.GSS(...) - Newton Method optimization algorithm (with fixed step length and GoldenSectionSearch-based step length, respectively) (since previous BETA: improved performance);
- new function Diag(...) - improved SMath diag() (since previous BETA: no changes);
- new function Gradient(...) - 1st-order derivatives (since previous BETA: improved performance);
- new function Hessian(...) - 2nd-order derivatives (since previous BETA: improved performance);
- function Jacobian(...) revisited (now returns only a derivative or an m×n Jacobian) (since previous BETA: improved performance);
- solver Bisection(...) revisited (the number of iterations is no longer required, as reported by adiaz) (since previous BETA: no changes);
- all root-finding algorithms in k variables now accept multiple thresholds (a target precision value for each function) (since previous BETA: no changes);
- fixed the "custom decimal symbol" issue of the HRE functions (since previous BETA: no changes).
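For readers wondering what the .CD variants do under the hood: central differences approximate each partial derivative from two symmetric evaluations. A rough Python sketch of the idea (my own illustration, not the plugin's actual implementation):

```python
import numpy as np

def gradient_cd(f, x, h=1e-6):
    """Central-difference gradient: df/dx_i ~ (f(x+h*e_i) - f(x-h*e_i)) / (2h)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def jacobian_cd(F, x, h=1e-6):
    """Central-difference Jacobian of a vector function F: R^n -> R^m."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(F(x)).size
    J = np.zeros((m, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        J[:, i] = (np.asarray(F(x + e)) - np.asarray(F(x - e))) / (2 * h)
    return J
```

Central differences have O(h²) truncation error versus O(h) for one-sided differences, which is why the .CD variants can be noticeably more accurate, at the cost of two evaluations per variable.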

REQUIRES customFunctions plugin

SEE THE ATTACHMENTS FOR MORE INFO - PLEASE REPORT ANY ISSUES



best regards,

w3b5urf3r

Edited by user 12 January 2013 00:04:46(UTC)  | Reason: requirements

File Attachment(s):
BETA_NonLinearSolvers.dll.zip (19kb) downloaded 90 time(s).
BETA_testing sm files.zip (32kb) downloaded 89 time(s).
Offline mkraska  
#103 Posted : 12 October 2012 10:57:31(UTC)
mkraska


Rank: Advanced Member

Groups: Registered
Joined: 15/04/2012(UTC)
Posts: 1,986
Germany

Was thanked: 1124 time(s) in 721 post(s)
Hi w3b5urf3r,

thank you for the instant response. I confirm the points mentioned in your screenshot of post #101. Special thanks for your patience with custom settings. This aspect must be really annoying for those who use standard settings. I am insisting because I am teaching German engineering students who are supposed to respect DIN standards.

In your HRE.RK example you used a precision with a ° unit. That makes me wonder if I got the precision argument usage right. Is it a tolerance on the functions or on the variables? Both could be sensible convergence criteria. A given variable tolerance might be sensible if you solve for the displacements of a mechanical system based on equilibrium conditions. You might be satisfied with being within 1 mm of the solution regardless of how big the residual forces are.

I did some timing and convergence testing, see attachments. The results are a little surprising.

Best regards, Martin
File Attachment(s):
trig1.sm (17kb) downloaded 43 time(s).
mkraska attached the following image(s):
trig1.png
Martin Kraska

Pre-configured portable distribution of SMath Studio: https://smath.com/wiki/SMath_with_Plugins.ashx
Offline Davide Carpi  
#104 Posted : 12 October 2012 11:46:53(UTC)
Originally Posted by: mkraska
Hi w3b5urf3r,

thank you for the instant response. I confirm the points mentioned in your screenshot of post #101. Special thanks for your patience with custom settings. This aspect must be really annoying for those who use standard settings. I am insisting because I am teaching German engineering students who are supposed to respect DIN standards.

In your HRE.RK example you used a precision with a ° unit. That makes me wonder if I got the precision argument usage right. Is it a tolerance on the functions or on the variables? Both could be sensible convergence criteria. A given variable tolerance might be sensible if you solve for the displacements of a mechanical system based on equilibrium conditions. You might be satisfied with being within 1 mm of the solution regardless of how big the residual forces are.

I did some timing and convergence testing, see attachments. The results are a little surprising.

Best regards, Martin


Hi Martin,

No problem; regional settings are an important feature of SMath, and I want the plugin to cause as few problems as possible.

In all the root-finding methods, convergence is checked against the function value: f(x.target) < epsilon (BTW, each testing file shows the structure of the function, so you can easily understand the behavior of each input argument). The degree unit in the convergence argument in the picture is an oversight on my part: I was checking that the argument accepts input in degrees and forgot to remove it (screenshot updated).

Bracketed algorithms accept a 2nd convergence criterion on the bracket width; further convergence criteria will be added ASAP.
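To illustrate the two bracketed stopping criteria, here is a sketch of the general idea in Python (parameter names are my own invention, not the plugin's API):

```python
def bisection(f, a, b, eps_f=1e-12, eps_x=1e-12, max_iter=200):
    """Bisection with two stopping criteria:
    |f(mid)| < eps_f (residual) OR bracket width b - a < eps_x.
    Assumes a < b and a sign change of f over [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = 0.5 * (a + b)
        fm = f(mid)
        if abs(fm) < eps_f or (b - a) < eps_x:
            return mid
        if fa * fm < 0:
            b, fb = mid, fm      # root is in the left half
        else:
            a, fa = mid, fm      # root is in the right half
    return 0.5 * (a + b)
```

Either criterion alone can be misleading (a flat function satisfies the residual test far from the root; a steep one satisfies the width test with a large residual), which is why having both is useful.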

Please be careful with the homotopy estimation methods... the 4th argument represents the number of homotopy transformations (Δλ = (1-0)/homotopyTransformations); each transformation involves a root-finding call, so more transformations -> more calls -> more computation time; consider that 10 transformations (or fewer) are usually enough.
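The stepping scheme can be sketched like this in Python (a toy 1-D Newton homotopy of my own, with hypothetical names, just to show why each extra transformation costs a full root-finding run):

```python
def homotopy_solve(f, fprime, x0, n_transforms=10, newton_iters=20, tol=1e-12):
    """Newton homotopy H(x, lam) = f(x) - (1 - lam)*f(x0); lam is stepped from
    0 to 1 in n_transforms steps (delta_lam = (1 - 0)/n_transforms). Each step
    runs a full Newton solve warm-started at the previous root, so more
    transformations means proportionally more root-finding work."""
    fx0 = f(x0)
    x = x0
    for k in range(1, n_transforms + 1):
        lam = k / n_transforms
        target = (1.0 - lam) * fx0      # at lam = 1 this is the original f(x) = 0
        for _ in range(newton_iters):
            r = f(x) - target
            if abs(r) < tol:
                break
            x -= r / fprime(x)
    return x
```

Because each λ step is warm-started from the previous root, a handful of transformations (around 10) usually balances robustness against total cost.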

regards,

w3b5urf3r

Edited by user 12 October 2012 12:57:46(UTC)  | Reason: Not specified

Offline mkraska  
#105 Posted : 12 October 2012 13:50:51(UTC)
Originally Posted by: w3b5urf3r_reloaded

Bracketed algorithms accept a 2nd convergence criterion on the bracket width; further convergence criteria will be added ASAP.

Please be careful with the homotopy estimation methods... the 4th argument represents the number of homotopy transformations (Δλ = (1-0)/homotopyTransformations); each transformation involves a root-finding call, so more transformations -> more calls -> more computation time; consider that 10 transformations (or fewer) are usually enough.


Hi w3b5urf3r,

thanks again. I played around with the bracketed root-finding algorithms based on your Bisection test file. The precision argument is either a number or a two-element list, each element being again a number. I guess/propose that in future releases these elements might themselves be lists, giving tolerances for the individual functions (first list) and for the variables (second list). Secant() complains about the type of its arguments.

The flexibility of your functions in terms of how the system is formatted and the mechanism of identifying the unknowns is really impressive and highly welcome.
BTW: How can I remove definitions from the namespace, i.e. make previously defined names unknown?

My surprise with the HRE test was not so much about the high computation time of HRE.NR but that increasing a non-eaten-up maximum changes the number of required iterations.

The ordinary user dreams of a wrapper function that by some logic uses whatever solver might be appropriate. A brute-force approach would be to try all available variants with an increasing number of allowed iterations, perhaps with some sort of trace protocol so that one could gain some experience by playing around.
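Such a wrapper could look roughly like this (a Python sketch under my own assumptions; nothing like this exists in the plugin, and the `secant` example solver is just a stand-in):

```python
def secant(f, a, b, eps, max_iter):
    """Example solver: plain secant iteration; raises if it fails to converge."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        if abs(fb) < eps:
            return b
        x_new = b - fb * (b - a) / (fb - fa)
        a, fa = b, fb
        b, fb = x_new, f(x_new)
    raise RuntimeError("secant did not converge")

def auto_solve(f, a, b, solvers, eps=1e-9):
    """Try each solver with an increasing iteration budget; keep a trace of
    every attempt so the user can see what worked and what failed."""
    trace = []
    for max_iter in (50, 200, 1000):
        for solver in solvers:
            try:
                root = solver(f, a, b, eps, max_iter)
                trace.append((solver.__name__, max_iter, "converged"))
                return root, trace
            except Exception as exc:
                trace.append((solver.__name__, max_iter, repr(exc)))
    raise RuntimeError(f"no solver converged; trace: {trace}")
```

Returning the trace alongside the root gives exactly the kind of protocol one could learn from by playing around.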

Best regards, Martin
File Attachment(s):
Bisections_testing_Kr.sm (62kb) downloaded 48 time(s).
mkraska attached the following image(s):
Bracketed.PNG
Offline omorr  
#106 Posted : 12 October 2012 16:39:54(UTC)
omorr


Rank: Administration

Groups: Registered, Advanced Member
Joined: 23/06/2009(UTC)
Posts: 1,740
Man
Serbia

Was thanked: 318 time(s) in 268 post(s)
Hello w3b5urf3r,

I was playing a bit with the optimization .CD functions - of course with my "famous" NLS example. The calculation time decreased to a more acceptable level (on the order of a few minutes each) and I could afford more runs for this testing. It seems that for this example the best performance came from GaussNewton.CD, NewtonMethod.CD and NewtonMethod.CDGSS. Moreover, NewtonMethod.CDGSS performed the best (with a bit longer calculation time) - just compare with the result obtained by my LMA() function in the same file. I was expecting much more from LevenbergMarquardt() - that it would be quite good on this example - but it seems I was wrong: it was quite slow and rather inefficient. Actually, the CD functions did quite a good job compared to the original ones. I hope some more improvements can be made here.

Attached are two files (worse and better initial conditions).
The picture is for the worse ones.

Regards,
Radovan


File Attachment(s):
NLMinimization-5.sm (101kb) downloaded 51 time(s).
NLMinimization-5a.sm (101kb) downloaded 58 time(s).
omorr attached the following image(s):
NLMinimization-5.png
When Sisyphus climbed to the top of a hill, they said: "Wrong boulder!"
Offline omorr  
#107 Posted : 12 October 2012 21:03:19(UTC)
CONTINUED...

Here is another attempt with the same problem, but now directly minimizing the following standard sum of squared residuals - the function S(x,y,z):

S(x,y,z):sum(((el(p,i)-10^{x+y/el(T,i)+z*log(el(T,i),10)})^2),i,1,n)

Again, it seems that NewtonMethod.CDGSS() performed the best for this example. Here are two files with better and worse initial conditions. The picture represents the solutions with the better initial conditions. I tried different numbers of points (reducing the variable n in front of the function S(x,y,z)) and the situation seems similar: NewtonMethod.CDGSS() was the best.
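For reference, the same cost function is easy to reproduce outside SMath; here is a small Python/NumPy version of S with synthetic data generated from known parameters (the parameter values and temperature range below are made up for illustration, not taken from the .sm files):

```python
import numpy as np

def make_cost(T, p):
    """Sum of squared residuals S(x, y, z) for the model p = 10^(x + y/T + z*log10(T))."""
    def S(theta):
        x, y, z = theta
        model = 10.0 ** (x + y / T + z * np.log10(T))
        return float(np.sum((p - model) ** 2))
    return S

# hypothetical synthetic data from known parameters (x, y, z) = (5.0, -1500.0, -2.0)
T = np.linspace(250.0, 400.0, 20)
p = 10.0 ** (5.0 + (-1500.0) / T + (-2.0) * np.log10(T))
S = make_cost(T, p)
```

With S in hand, any of the cost-only methods (GradientDescent, NewtonMethod, NCGM) can be pointed at it; by construction S is zero at the generating parameters, which makes such synthetic data handy for testing solvers.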

These functions are promising, requiring less computation time, and I hope this testing and these examples give you some ideas on how to increase their performance further.

Regards,
Radovan


File Attachment(s):
NLMinimization-6.sm (116kb) downloaded 47 time(s).
NLMinimization-6a.sm (116kb) downloaded 41 time(s).
omorr attached the following image(s):
NLMinimization-6.png
Offline omorr  
#108 Posted : 12 October 2012 22:48:26(UTC)
Hello w3b5urf3r,

It seems you've done something with the numerical derivatives - they seem quite accurate.
Just for comparison, see the post and the picture above regarding splines and spline derivatives. Jacobian.CD()/Gradient.CD() did the job quite well.

Thank you very much!

Regards,
Radovan

Edited by user 12 October 2012 22:50:59(UTC)  | Reason: Not specified

omorr attached the following image(s):
derivatives.png
Offline omorr  
#109 Posted : 13 October 2012 14:41:00(UTC)
Hello w3b5urf3r

Instead of minimizing the sum of squared residuals, I have now tried to minimize the individual residuals, which will sometimes work (Levenberg-Marquardt should do the job here). It appears that it worked well and quite fast for most of the methods. See the attached files *7.sm and *7a.sm (better and worse guess values).

On the other hand, the squared-residuals minimization is represented in files *8.sm and *8a.sm. It also worked quite well. Therefore, I am quite satisfied with these CD functions and this example solved in this way. Will look at some other example next time.

Regards,
Radovan

P.S. It seems something is wrong with NewtonRaphson.CD(). I could not solve my test problem properly, as I expected to based on the previous post regarding the numerical Jacobian. Also, in your test file, NewtonRaphson() solved the problem in 7 iterations but NewtonRaphson.CD() solved the same problem in 344 iterations?

Edited by user 13 October 2012 14:43:12(UTC)  | Reason: Not specified

File Attachment(s):
NLMinimization-7.sm (134kb) downloaded 43 time(s).
NLMinimization-7a.sm (133kb) downloaded 39 time(s).
NLMinimization-8.sm (134kb) downloaded 38 time(s).
NLMinimization-8a.sm (133kb) downloaded 43 time(s).
omorr attached the following image(s):
NLMinimization-78.png
Offline Davide Carpi  
#110 Posted : 13 October 2012 17:41:41(UTC)
Originally Posted by: omorr
Hello w3b5urf3r

Instead of minimizing the sum of squared residuals, I have now tried to minimize the individual residuals, which will sometimes work (Levenberg-Marquardt should do the job here). It appears that it worked well and quite fast for most of the methods. See the attached files *7.sm and *7a.sm (better and worse guess values).

On the other hand, the squared-residuals minimization is represented in files *8.sm and *8a.sm. It also worked quite well. Therefore, I am quite satisfied with these CD functions and this example solved in this way. Will look at some other example next time.

Regards,
Radovan

P.S. It seems something is wrong with NewtonRaphson.CD(). I could not solve my test problem properly, as I expected to based on the previous post regarding the numerical Jacobian. Also, in your test file, NewtonRaphson() solved the problem in 7 iterations but NewtonRaphson.CD() solved the same problem in 344 iterations?

Hi omorr,

thank you for your testing!

You're right, the finite-difference Newton-Raphson contained a bug; it is now fixed (BETA plugin updated).

EDIT: The general Levenberg-Marquardt and Gauss-Newton algorithms are built to work with both the system of equations (the residuals) and the cost function (the sum of squared residuals; the plugin computes the sum of squares of the system inside each algorithm); the LMA(...) in your scripts accepts the sum of squared residuals as input because it has a custom Jacobian inside it.
GradientDescent(...), NewtonMethod(...) and NCGM(...) need only the cost function, so you can use whatever input you want.
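In other words, the residual-based solvers form the sum of squares internally. A minimal Gauss-Newton sketch in Python showing that division of labor (my own illustration with a central-difference Jacobian; not the plugin's code):

```python
import numpy as np

def gauss_newton(residuals, x0, h=1e-6, tol=1e-10, max_iter=50):
    """Gauss-Newton on a residual vector r(x): the cost ||r||^2 is implicit in
    the least-squares step, so the caller supplies residuals, not the cost."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(residuals(x), dtype=float)
        # central-difference Jacobian of the residual vector
        J = np.zeros((r.size, x.size))
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            J[:, i] = (np.asarray(residuals(x + e)) - np.asarray(residuals(x - e))) / (2 * h)
        # solve the linearized least-squares problem min ||J*step + r||
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x
```

A cost-only method (gradient descent, Newton, NCGM) would instead receive the scalar sum of squares directly, which is why the two families take different inputs.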

Originally Posted by: omorr
Hello w3b5urf3r,

It seems you've done something with the numerical derivatives - they seem quite accurate.
Just for comparison, see the post and the picture above regarding splines and spline derivatives. Jacobian.CD()/Gradient.CD() did the job quite well.

Thank you very much!

Regards,
Radovan

I'm surprised; I've done nothing strange, just a "generalization" of finite differences to several variables... maybe you changed some parameter in your test (e.g. the perturbation value)?


regards,

w3b5urf3r

Edited by user 14 October 2012 00:41:44(UTC)  | Reason: Not specified

Offline omorr  
#111 Posted : 14 October 2012 10:33:11(UTC)
Hello w3b5urf3r,
As your fan, I am going to bother you some more.

Thank you for correcting NewtonRaphson.CD(). It seems to work; attached is a picture of my many-times-mentioned "nightmare example", which it solved very well. Take a look at the first picture attached.

Originally Posted by: w3b5urf3r_reloaded
EDIT: The general Levenberg-Marquardt and Gauss-Newton algorithms are built to work with both the system of equations (the residuals) and the cost function (the sum of squared residuals; the plugin computes the sum of squares of the system inside each algorithm); the LMA(...) in your scripts accepts the sum of squared residuals as input because it has a custom Jacobian inside it.
GradientDescent(...), NewtonMethod(...) and NCGM(...) need only the cost function, so you can use whatever input you want.

Thank you for the explanation. To be honest, I used my test example without too much thinking about the residuals vs. the cost function - I tried everything.

Here I tried the CD functions on the same example with one parameter added (there are four parameters now). Unfortunately, all of them failed - see the attached files (*9.sm uses just the residuals, *9a.sm the squared residuals, *9b.sm the sum of squared residuals (the cost function)). I might have made some mistake or a wrong choice of perturbation parameters or tolerances; please see the bottom of each file. My "home made" LMA() function did something: it reduced the value of the cost function and seems successful here - see the green regions above (the second picture attached). If I did not do anything wrong here, I hope this will also help you to further improve these functions.

Regards,
Radovan

Edited by user 14 October 2012 10:39:23(UTC)  | Reason: Not specified

File Attachment(s):
NLMinimization-9.sm (110kb) downloaded 40 time(s).
NLMinimization-9a.sm (110kb) downloaded 38 time(s).
NLMinimization-9b.sm (107kb) downloaded 35 time(s).
omorr attached the following image(s):
Primer4.7w-1-Numerical-Jacobian-NRCD.png
NLM-LMA.png
Offline kilele  
#112 Posted : 15 October 2012 02:33:31(UTC)
kilele


Rank: Advanced Member

Groups: Registered
Joined: 30/03/2011(UTC)
Posts: 393

Was thanked: 132 time(s) in 113 post(s)
Hi
Just for your reference,

this is an LM implementation coded in Matlab, with three numerical examples to test its numerical robustness.
See "Levenberg-Marquardt method: introduction and examples" and "Levenberg-Marquardt method: m-files"
at http://www.duke.edu/~hpgavin/ce281/

"Iterative Methods for Optimization" by C.T. Kelley also has Matlab code for LM and for numerical derivatives.
Free code at
http://www.siam.org/book...lley/fr18/matlabcode.php
Offline Davide Carpi  
#113 Posted : 15 October 2012 20:57:39(UTC)
Originally Posted by: mkraska
Hi w3b5urf3r,

thanks again. I played around with the bracketed root-finding algorithms based on your Bisection test file. The precision argument is either a number or a two-element list, each element being again a number. I guess/propose that in future releases these elements might themselves be lists, giving tolerances for the individual functions (first list) and for the variables (second list). Secant() complains about the type of its arguments.

The flexibility of your functions in terms of how the system is formatted and the mechanism of identifying the unknowns is really impressive and highly welcome.
BTW: How can I remove definitions from the namespace, i.e. make previously defined names unknown?

My surprise with the HRE test was not so much about the high computation time of HRE.NR but that increasing a non-eaten-up maximum changes the number of required iterations.

The ordinary user dreams of a wrapper function that by some logic uses whatever solver might be appropriate. A brute-force approach would be to try all available variants with an increasing number of allowed iterations, perhaps with some sort of trace protocol so that one could gain some experience by playing around.

Best regards, Martin


Hi Martin,

Actually, my idea for the target-precision input is something like this (a table with a row for each function):
Open in SMath Cloud

About the tracing, I've already thought of using an output *.txt file, or directly a 3rd output parameter (a table); I don't know which is the better way...

A brute-force approach like Andrey's roots() and solve() would be nice (also, e.g., built-in maximize() and minimize() functions), but it's a hard task... actually the closest approach to this goal appears to be Draghilev's method as implemented by Uni (unfortunately, it requires a Maple copy...)


regards,

w3b5urf3r

Edited by user 15 October 2012 21:08:12(UTC)  | Reason: Not specified

Offline mkraska  
#114 Posted : 16 October 2012 00:08:50(UTC)
Originally Posted by: w3b5urf3r_reloaded

Hi Martin,

Actually, my idea for the target-precision input is something like this (a table with a row for each function):
Open in SMath Cloud

About the tracing, I've already thought of using an output *.txt file, or directly a 3rd output parameter (a table); I don't know which is the better way...



Hi w3b5urf3r,

I'd prefer a list or vector of lists as the format for target precision, because the number of functions does not necessarily match the number of variables. Also, giving just one value for all functions or variables would then be done by a one-element list (system). Of course, this is perhaps a matter of taste and as such the cook's decision.

BTW, in the past I have repeatedly applied the evolution strategy, including the version with covariance matrix adaptation. As far as I understand, this algorithm requires eigenvalues and eigenvectors of the covariance matrix to be calculated. The procedure is quite robust against noise, including inconsistent offspring selection. See CMA-ES for theory and code examples.
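For readers unfamiliar with evolution strategies: even without covariance matrix adaptation, the basic (1+1)-ES with the 1/5 success rule captures the flavor. A toy Python sketch (my own illustration, not CMA-ES itself; the adaptation constants are conventional choices, not taken from any particular reference implementation):

```python
import random

def es_one_plus_one(f, x0, sigma=0.5, iters=2000, seed=0):
    """(1+1) evolution strategy with the classic 1/5 success rule for
    step-size adaptation; a far simpler relative of CMA-ES (no covariance)."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        # mutate every coordinate with isotropic Gaussian noise
        child = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(child)
        if fc <= fx:                 # offspring replaces parent on success
            x, fx = child, fc
            sigma *= 1.22            # grow the step on success...
        else:
            sigma *= 0.95            # ...shrink on failure (targets ~1/5 success)
    return x, fx
```

Because selection only compares function values, the method tolerates noisy evaluations well; CMA-ES adds a covariance matrix (hence the eigendecomposition) so the mutation ellipsoid can adapt to the local landscape.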

Best regards, Martin

Offline Davide Carpi  
#115 Posted : 16 October 2012 00:25:00(UTC)
Originally Posted by: mkraska
Originally Posted by: w3b5urf3r_reloaded

Hi Martin,

Actually, my idea for the target-precision input is something like this (a table with a row for each function):
Open in SMath Cloud

About the tracing, I've already thought of using an output *.txt file, or directly a 3rd output parameter (a table); I don't know which is the better way...



Hi w3b5urf3r,

I'd prefer a list or vector of lists as the format for target precision, because the number of functions does not necessarily match the number of variables. Also, giving just one value for all functions or variables would then be done by a one-element list (system). Of course, this is perhaps a matter of taste and as such the cook's decision.

BTW, in the past I have repeatedly applied the evolution strategy, including the version with covariance matrix adaptation. As far as I understand, this algorithm requires eigenvalues and eigenvectors of the covariance matrix to be calculated. The procedure is quite robust against noise, including inconsistent offspring selection. See CMA-ES for theory and code examples.

Best regards, Martin



Ah... yes, I didn't mention it before, but any ε.x# can be a vector/list/matrix if the number of variables of the related function f.x# is greater than one.


regards,

w3b5urf3r

Edited by user 30 October 2012 15:32:00(UTC)  | Reason: Not specified

Offline Davide Carpi  
#116 Posted : 30 October 2012 15:41:19(UTC)
Hi all,

I'm working on improving the convergence criteria; at the moment I have found two different ways to do this (see the attachments).

I'd like to know which would be preferred in your opinion.


best regards,

w3b5urf3r
File Attachment(s):
NS_options.sm (43kb) downloaded 46 time(s).
Davide Carpi attached the following image(s):
SMath Studio - [NS_options.sm].png
Offline omorr  
#117 Posted : 30 October 2012 16:48:54(UTC)
Hello w3b5urf3r,
Originally Posted by: w3b5urf3r_reloaded

...I'd like to know which would be preferred in your opinion.

If you cannot keep both at the same time, just use the one that is easier for you to implement and maintain.

Regards,
Radovan


Offline kilele  
#118 Posted : 30 October 2012 23:32:20(UTC)
Hello
I was reading about stiff IVPs and recalled that I had linked a GPL .NET library.
I've just added one more resource with GEAR BDF code to that post:
http://en.smath.info/for...algorithms.aspx#post7971
Offline mkraska  
#119 Posted : 31 October 2012 00:05:46(UTC)
Hello w3b5urf3r,

I'd prefer local settings. However, there is the danger of bloating the dynamic assistance and the function menu with multiple functions of the same name with different numbers of arguments. Do you have access to the context menu? Then some of the options might be set there, just like decimal places and exponential thresholds. That should be limited to options that just affect the display or the output verbosity.

All options that affect the result as such should be visible without digging into menus (think of the beloved symbolic/numeric obfuscation by using the same operator for both).

The context-menu options could get a global defaults dialog. Is your menu access limited to Insert? I would expect such settings under the Tools > Settings menu branch.

Looking forward to a new release of your plugin...

Best regards, Martin
Offline mkraska  
#120 Posted : 01 February 2013 04:40:59(UTC)
Hello w3b5urf3r,

I played around with the nonlinear solvers, unfortunately without any success. At least not if units are involved.

Are the solvers supposed to work with units?
- can the unknown variables have units (given via the units of the limits or the start point)?
- do the epsilons need to have units as well?

Do I need to define functions, or can I just give expressions as the first argument?
File Attachment(s):
Bisection test.sm (10kb) downloaded 46 time(s).
mkraska attached the following image(s):
Bisection test.png
Martin Kraska

Pre-configured portable distribution of SMath Studio: https://smath.com/wiki/SMath_with_Plugins.ashx