
After building Models(Paint those!)

Cross-Validation: the data is separated into random, equal-sized sub-samples; it helps you estimate and improve model performance.

Different Forms of cross Validation:

  1. Train-Test Split – low variance but more bias
  2. LOOCV (Leave-One-Out Cross-Validation) – leave one data point out and fit the model on the rest of the data – low bias but high variance

Now, the above two methods have limitations related to bias and variance, so what to do? Let’s fire up ‘Cross-Validation’!

There are various other important and interesting cross-validation methods, like TimeSeriesSplit, Leave-P-Out (LPO), ShuffleSplit (random permutation), and StratifiedKFold:

Special Case:

Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance, there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling, as implemented in StratifiedKFold and StratifiedShuffleSplit, to ensure that relative class frequencies are approximately preserved in each train and validation fold.
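
Here is a minimal sketch of plain k-fold versus stratified k-fold cross-validation with scikit-learn; the toy imbalanced dataset and the logistic-regression model are just placeholders, not part of the original post:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

# Toy imbalanced dataset: roughly 90% negative and 10% positive samples.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
model = LogisticRegression(max_iter=1000)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # preserves class frequencies per fold

print("KFold scores:          ", cross_val_score(model, X, y, cv=kf))
print("StratifiedKFold scores:", cross_val_score(model, X, y, cv=skf))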

 

Everything you should know about Quantitative Finance(Almost Everything ;)

Quant trading is intellectually very satisfying because it draws heavily from fields as diverse as computer science, statistics, math, psychology, economics, business, operations research, and history.
Quant trading combines programming and trading. Programming is the fastest way to take an idea and turn it into reality (compare it to writing a book and getting it published, or to designing a building, getting zoning permits, doing construction, and so on). Trading is the most direct approach to making money (compare it with starting the next Facebook: programming, hiring, advertising, and so on, or with creating a retail chain: hiring, waiting for money to flow in to reinvest, and so on). Combining programming and trading, you have the most direct path from an idea in your brain to cash, but never forget that the real job is to minimize risk.


Trading and Trading Strategies:

There are many terms that sound complicated when you read or hear about trading; one of them is Statistical Arbitrage. It looks complicated, but in practice it is just another form of pairs trading. Pairs trading usually means selling one and buying the other of two inter-related stocks/commodities/currencies/futures. A trading strategy is a set of calculations that a trader/quant believes will work out in the near or long term in the markets. Execution can be fully automated or semi-automated. To write a trading strategy one does not need to know programming or advanced mathematics, but one does need to understand simple statistical methods. On the other hand, an advanced degree in mathematics or a similar field also does not guarantee that one can write “profitable” trading strategies; I believe past trading experience matters just as much.

How Easy it is to earn from Trading:

When we talk about programming languages, one question is often asked: ‘How fast is it?’ and the counter-question is always ‘Define fast.’ I would like to apply the same idea here: ‘define easy.’ We can say that earning from trading is easy, but we must not confuse easy with ‘instant.’ Most of the time, people or brokerage firms are more interested in intra-day trading because the volume is big, and that means more buy and sell opportunities.

Borrow Money While Trading:

The good thing about trading is that you can start with a small amount of money and always get leverage. Leverage is capital you can borrow from trading firms: if you make a profit it is shared with the firm, but in the case of a loss the loss is all yours. A good trading strategy is considered one with good returns, a better Sharpe ratio, small draw-downs, and as little leverage as possible. Suppose you get a very good tip about a really interesting trade that you are pretty sure will work; even then you should use as little leverage as possible. It is dangerous to over-leverage in pursuit of overnight riches.

How much time is required for a self-running Quantitative Business:

The time required really depends on the type of trading strategies you are using. Some trading strategies or machine-learning models need to be trained a few minutes before the market opens, while some models need to be trained after the market closes. Sometimes one has to come up with trading strategies that train models once a week and run the buy/sell calls once a month.

How Hard it is to Create Trading Strategies:

Creating trading strategies can be as hard as doing calculus and derivatives, or as easy as finding the mean or median of a given set of data. You might think about various things while creating trading strategies, but the simpler path is to think more and more in terms of quantitative techniques. Don’t chase the most precise, perfect, or accurate results you can achieve; instead look for the valuable information, and quantify the valuable things you need to know about your trading model.

Importance of Programming in Quantitative Finance:

Developing a trading strategy is a matter of taste. One can develop trading strategies just by looking at the market and writing a daily schedule, like ‘generate a buy call at this time and a sell call at that time,’ and that would also work. But the real reason behind all this is that you want to create a trading bot/robot that does it automatically, so you can observe it as an external factor and look into the various things that can be tuned. Converting a trading strategy into a program/code/algorithm is therefore one of the most important tasks we should pursue.

A Dollar-neutral Portfolio:

The market value of the long positions equals the market value of the short positions.

Market-Neutral Portfolio:

The beta of the portfolio with respect to a market index is close to zero (beta measures the ratio between the expected returns of the portfolio and the expected returns of the market index). Dollar-neutral and market-neutral portfolios require twice the capital or leverage of a long-only or short-only portfolio.
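
Beta is commonly estimated as the slope of a regression of portfolio returns on market-index returns, which equals Cov(portfolio, market) / Var(market). A minimal sketch with made-up daily returns (the numbers are purely illustrative):

import numpy as np

# Hypothetical daily returns (illustrative only).
market_returns = np.array([0.001, -0.002, 0.003, 0.0005, -0.001])
portfolio_returns = np.array([0.0005, -0.001, 0.0015, 0.0002, -0.0006])

# Beta = Cov(portfolio, market) / Var(market)
beta = np.cov(portfolio_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
print("Portfolio beta:", round(beta, 3))   # close to zero would indicate a market-neutral book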

 

Directional Strategy – does only one thing, either buy or sell.
Bi-directional Strategy – does both, buy and sell.

 

Survivorship-Bias-Free Data and its importance:

One can use a database that is not survivorship-bias free for back-testing intra-day strategies, because in most cases a good company does not collapse in a single day, and the same holds for a company’s rise.

How the holding period affects your Trading Strategies:

I was building trading strategies based on various factors. Using machine learning I was predicting: if tomorrow’s close price is higher than today’s, then buy, otherwise sell the stock. It is a great practice exercise, but in reality we have to understand one thing carefully: I am holding the stock for a whole day, and anything could go wrong during that day. So what I had to do is shorten the holding period of the stock. The less time I hold that particular stock, the smaller the chance of anything going wrong, and the lower the risk. 🙂

Sharpe Ratio (Most important thing):

The Sharpe ratio describes how consistent your returns are. Think of the Sharpe ratio as independent of the benchmark: if you want to create a strategy with good returns and a high Sharpe ratio, your approximation or understanding of the market should come up with rules that generalize. Your Sharpe ratio should be more than 1 (one); the strategy’s returns can be less or more than the benchmark returns. The Sharpe ratio computed this way is also called the information ratio, and the formula is as follows:
Information Ratio (Sharpe Ratio) = Average of Excess Returns / Standard Deviation of Excess Returns
Excess Returns = portfolio returns − benchmark returns
As a rule of thumb, any strategy that has a Sharpe ratio of less than 1 is not suitable as a stand-alone strategy. For a strategy that achieves profitability almost every month, its (annualized) Sharpe ratio is typically greater than 2. For a strategy that is profitable almost every day, its Sharpe ratio is usually greater than 3.
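
A minimal sketch of the information-ratio calculation above, with made-up daily strategy and benchmark returns, annualized by the usual sqrt(252) convention for daily data:

import numpy as np

# Hypothetical daily returns (illustrative only).
strategy_returns = np.array([0.002, -0.001, 0.0015, 0.003, -0.0005, 0.001])
benchmark_returns = np.array([0.001, -0.0005, 0.001, 0.002, -0.001, 0.0005])

excess_returns = strategy_returns - benchmark_returns
sharpe_daily = excess_returns.mean() / excess_returns.std(ddof=1)
sharpe_annualized = np.sqrt(252) * sharpe_daily
print("Annualized Sharpe (information ratio):", round(sharpe_annualized, 2))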

One important thing you must know is that benchmarks vary according to the market/exchange you are working with.

Sharpe-Ratio Vs Returns:

If the Sharpe ratio is such a nice performance measure across different strategies, you may wonder why it is not quoted more often instead of returns. A higher Sharpe ratio will actually allow you to make more profits in the end, since it allows you to trade at a higher leverage. It is the leveraged return that matters in the end, not the nominal return of a trading strategy.

Why and How Draw-Down is bad:

A strategy suffers a draw-down whenever it has lost money recently.
A draw-down at a given time t is defined as the difference between the current equity value (assuming no redemption or cash infusion) of the portfolio and the global maximum of the equity curve occurring on or before time t.

You must know draw-down of strategy before using it!

Length and depth of your Draw down:

The length of your draw-down defines how long it would take to recover, and the depth defines how much you can lose. But those results come from your back-testing; in real trading you have to understand things more carefully, because live draw-downs can be much smaller or larger than the back-tested results.
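
A minimal sketch of computing the maximum draw-down (depth) and the longest draw-down length from an equity curve; the equity numbers here are made up:

import numpy as np

equity = np.array([100, 102, 101, 105, 103, 99, 104, 108, 107, 110.0])   # hypothetical equity curve

running_max = np.maximum.accumulate(equity)
drawdown = (equity - running_max) / running_max   # depth at each point (always <= 0)
max_drawdown = drawdown.min()                     # worst depth

# Length: longest run of consecutive periods spent below the previous peak.
longest, current = 0, 0
for under_water in drawdown < 0:
    current = current + 1 if under_water else 0
    longest = max(longest, current)

print("Max draw-down: {:.1%}, longest draw-down length: {} periods".format(max_drawdown, longest))
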
What is Slippage?

This delay can cause a “slippage,” the difference between the price that triggers the order and the execution price.

How Will Transaction Costs Affect the Strategy?
Transaction cost is the brokerage fee, or anything else you pay, when you buy/sell a stock. As we discussed, the shorter the holding period, the more profit you can make, or at least the less risk you carry. One thing you must never forget: minimizing risk is what strategies and algorithmic trading are all about. Every time a strategy buys and sells a security, it incurs a transaction cost. The more frequently it trades, the larger the impact of transaction costs will be on the profitability of the strategy. These transaction costs are not just due to commission fees charged by
the broker. There will also be the cost of liquidity—when you buy and sell securities at their market prices, you are paying the bid-ask spread. If you buy and sell securities using limit orders, however,
you avoid the liquidity costs but incur opportunity costs. This is because your limit orders may not be executed, and therefore you may miss out on the potential profits of your trade. Also, when you buy or sell a large chunk of securities, you will not be able to complete the transaction without impacting the prices at which this transaction is done. This effect on the market prices due to your own order is called market impact, and it can contribute to a large part of the total transaction cost when the security is not very liquid.

Why survivorship bias should not be there?

This is especially true if the strategy has a “value” bent; that is, it tends to buy stocks that are cheap. Some stocks were cheap because the companies were going bankrupt shortly. So if your strategy includes only those cases when the stocks were very cheap but eventually survived (and maybe prospered) and neglects those cases where the stocks finally did get de-listed, the backtest performance will, of course, be much better than what a trader would actually have suffered at that time.

Data-Snooping Bias?(Model Over-fitting)
If you build a trading strategy that has 100 parameters, it is very likely that you can optimize those parameters in such a way that the historical performance will look fantastic. It is also very likely that the future performance of this strategy will look nothing like its historical performance and will turn out to be very poor. By having so many parameters, you are probably fitting the model to historical accidents in the past that will not repeat themselves in
future. Actually, this so-called data-snooping bias is very hard to avoid even if you have just one or two parameters (such as entry and exit thresholds).

Important Questions to ask yourself:

1. How much time do you have for baby-sitting your trading programs?
2. How good a programmer are you?
3. How much capital do you have?
4. Is your goal to earn steady monthly income or to strive for a large, long-term capital gain?

Important questions you must ask yourself before using a Trading Strategy:
1. Does it outperform a benchmark?
2. Does it have a high enough Sharpe ratio?
3. Does it have a small enough drawdown and short enough drawdown duration?
4. Does the backtest suffer from survivorship bias?
5. Does the strategy lose steam in recent years compared to its earlier years?
6. Does the strategy have its own “niche” that protects it from intense competition from large institutional money managers?

Sharpe Ratio and draw-downs (length and duration):
Quantitative traders use a good variety of performance measures. Which set of numbers to use is sometimes a matter of personal preference, but with ease of comparison across different strategies and traders in mind, I would argue that the Sharpe ratio and draw-downs are the two most important. Notice that I did not include average annualized returns, the measure most commonly quoted by investors, because if you use this measure, you have to tell people a number of things about what denominator you use to calculate returns. For example, in a long-short strategy, did you use just one side of capital or both sides in the denominator? Is the return a leveraged one (the denominator is based on account equity), or is it unleveraged (the denominator is based on the market value of the portfolio)? If the equity or market value changes daily, do you use a moving average as the denominator, or just the value at the end of each day or each month? Most (but not all) of these problems associated with comparing returns can be avoided by quoting the Sharpe ratio and draw-downs instead as the standard performance measures.

Interesting things about Back-Testing:

Back-testing is the process of creating the historical trades given the historical information available at that time, and then finding out what the subsequent performance of those trades is. This process seems easy given that the trades were made using a computer algorithm in our case, but there are numerous ways in which it can go wrong. Usually, an erroneous back-test would produce a historical performance that is better than what we would have obtained in actual trading. We have already seen how survivorship bias in the data used for back-testing can result in inflated performance.

Importance of Sample Size (How much historical data you need?):
The most basic safeguard against data-snooping bias is to ensure that you have a sufficient amount of back-test data relative to the number of free parameters you want to optimize. As a rule of thumb, let’s assume that the number of data points needed for optimizing your parameters is equal to 252 times the number of free parameters your model has. So, for example, let’s assume you have a daily trading model with three parameters. Then you should
have at least three years’ worth of back-test data with daily prices.

However, if you have a three-parameter trading model that updates positions every minute, then you should have at least 252/390 year, or about seven months, of one-minute back-test data. (Note that if
you have a daily trading model, then even if you have seven months of minute-by-minute data points, effectively you only have about 7 × 21 = 147 data points, far from sufficient for testing a three parameter model.)

Training-Set and Test-Set:

It is a very simple concept: you split the data into two portions, one is the training set for your model to learn from, and the other is the test set for evaluating it.
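
A minimal sketch with scikit-learn’s train_test_split; the iris dataset and the 80/20 ratio are just common placeholder choices:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# 80% of the rows for training, 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)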

What is Profit cap?:

A profit cap is the limit at which you want your strategy to exit a position. Reaching the profit cap is the goal of the strategy; a greedy strategy without a profit cap and a stop-loss could destroy all of your liquidity.
Parameter-less model:

This is like a self-sustaining model that does all the work by itself; that means you need to make sure your model is safe and secure and that parameters like the profit cap are calculated by the model itself.

The advantage of a parameterless trading model is that it minimizes the danger of over-fitting the model to multiple input parameters (the so-called “data-snooping bias”). So the back-test performance should be much closer to the actual forward performance. (Note that parameter optimization does not necessarily mean picking one best set of parameters that give the best back-test performance. Often, it is better to make a trading decision based on some kind of averages over different sets of parameters.)

A Simple understanding of Back-testing:

Backtesting is about conducting a realistic historical simulation of the performance of a strategy. The hope is that the future performance of the strategy will resemble its past performance, though as your investment manager will never tire of telling you, this is by no means guaranteed!
There are many nuts and bolts involved in creating a realistic historical back-test and in reducing the divergence of future performance from the back-test.

Things to take care in Back-Testing:

Data: Split/dividend adjustments, noise in daily high/low, and survivorship bias.
Performance measurement: Annualized Sharpe ratio and maximum draw-down.
Look-ahead bias: Using unobtainable future information for past trading decisions.
Data-snooping bias: Using too many parameters to fit historical data, and avoiding it using large enough sample, out-of-sample testing, and sensitivity analysis.
Transaction cost: Impact of transaction costs on performance.
Strategy refinement: Common ways to make small variations on the strategy to optimize performance.

 

Importance of Speed in Algorithmic Trading:

There are various things involved when you talk about HFT and speed. It matters which part of your trading algorithm takes the most execution time. Think of the 90/10 rule in software development: optimize the 10% of your code that takes 90% of the time. If your trading strategy is written in Python, various portions of it may be slow, so it can be better to use C or C++ for those parts; alternatively you can use Cython, which is really fast both for development and for execution. Your internet connection should be fast as well, so you can get data quickly and make decisions based on that data.
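
A minimal sketch of finding that slow 10% with Python’s built-in cProfile; signal_function and run_strategy here are placeholders for whatever your strategy actually computes:

import cProfile
import pstats

def signal_function(prices):
    # Placeholder for the expensive part of a strategy.
    return sum(prices) / len(prices)

def run_strategy():
    prices = list(range(1, 100000))
    for _ in range(100):
        signal_function(prices)

profiler = cProfile.Profile()
profiler.enable()
run_strategy()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)   # show the five most expensive calls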

Let’s again talk about importance of Programming:

You can use various off-the-shelf platforms for trading, or you can create custom ones that use different back-testers, different trading exchanges (via their APIs), different machine-learning models, and different platforms and programming languages.
How to Decrease Brokerage Cost using Programming?:
In order to minimize market impact cost, you should limit the size (number of shares) of your orders based on the liquidity of the stock. One common measure of liquidity is the average daily volume (it is your choice what lookback period you want to average over).
As a rule of thumb, each order should not exceed 1 percent of the average daily volume. As an independent trader, you may think that it is not easy to reach this 1 percent threshold, and you would be right when the stock in question is a large-cap stock belonging to the S&P 500. However, you may be surprised by the low liquidity of
some small-cap stocks out there.
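
A minimal sketch of capping an order at 1 percent of the average daily volume; the lookback window and the volume series are assumptions for illustration:

import numpy as np

daily_volume = np.array([120000, 95000, 110000, 80000, 105000])   # hypothetical daily share volumes over the lookback
average_daily_volume = daily_volume.mean()

desired_shares = 5000
max_shares = int(0.01 * average_daily_volume)   # the 1% of ADV rule of thumb
order_size = min(desired_shares, max_shares)
print("ADV: {:.0f}, order capped at: {} shares".format(average_daily_volume, order_size))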

 

Paper Trading and Re-creating Strategies:(Testing in Real Market)
The moment you start paper trading you will realize that there is a glaring look-ahead bias in your strategy—there may just be no way you could have obtained some crucial piece of data before you enter an order! If this happens, it is “back to the drawing board.” You should be able to run your ATS, execute paper trades, and then compare the paper trades and profit and loss (P&L) with the theoretical ones generated by your backtest program using the latest data. If the difference is not due to transaction costs (including an expected delay in execution for the paper trades), then your software likely has bugs.

Another benefit of paper trading is that it gives you better intuitive understanding of your strategy, including the volatility of its P&L, the typical amount of capital utilized, the number of trades per day, and the various operational difficulties including data issues. Even though you can theoretically check out most of these features
of your strategy in a back-test, one will usually gain intuition only if
one faces them on a daily, ongoing basis. Back-testing also won’t reveal the operational difficulties, such as how fast you can download all the needed data before the market opens each day and how you
can optimize your operational procedures in actual execution.

Psychology and Trading:

This is one of the most important concepts you must know. Trading is real money, and real money can drive you really mad in lots of ways. I am pointing this out again: algorithmic trading is all about minimizing your risk, not about getting rich instantly. Yes, you can create strategies with a high Sharpe ratio and sell them to firms like JP Morgan or other big banks, and then you will be rich very quickly. 🙂

More or less, it’s not just the trading strategy you create; it’s your mind and experience as well that help you grow, along with your capital.

How can we implement Risk Management using Programming?:

Calculating the Kelly criterion is relatively simple and relies on two basic components: your trading strategy’s win percentage probability and its win to loss ratio.

The win percentage probability is the probability that a trade will have a positive return. The win to loss ratio is equal to your total trading profits divided by your total trading losses.

These will help you arrive at a number called the Kelly percentage. This gives you a guide to what percentage of your trading account is the maximum amount you should risk on any given trade.

The formula for the Kelly percentage looks like this:

Kelly % = W – [(1 – W) / R]

  • Kelly % = Kelly percentage
  • W = Win percentage probability (total number of winning trades/total number of trades)
  • R = Win to loss ratio (total amount gained by winning trades/total amount lost by losing trades)
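
A minimal sketch of the Kelly formula above, with made-up trade statistics:

def kelly_percentage(win_probability, win_loss_ratio):
    # Kelly % = W - (1 - W) / R
    return win_probability - (1.0 - win_probability) / win_loss_ratio

# Hypothetical numbers: 55% of trades win, and total profits are 1.5x total losses.
W, R = 0.55, 1.5
print("Kelly %: {:.1%}".format(kelly_percentage(W, R)))   # maximum fraction of the account to risk per trade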

References:

Max Dama: http://www.decal.org/file/2945

Quantitative Trading by Ernie P. Chan: http://www.amazon.in/Quantitative-Trading-Build-Algorithmic-Business/dp/0470284889

Kelly criterion (tradimo): http://en.tradimo.com/learn/money-management/kelly-criterion/

Effective Machine Learning for Quantitative Finance

Sometimes we believe that computer programs are magic: software works like magic in the machine, you click a few buttons on your PC and boom! The real work is far away from all this. You have to handle large databases, parse them as required, and do lots of UI tricks and exception handling before you get a great product, or at least one a real human can use. Sometimes it feels the same in machine learning and data science. Selecting some predictors and just applying the most popular model might not be enough, because we are developing a model and learning with it, so it makes sense to grow with it and let your models grow as well.

Let’s have a discussion with ourselves and learn how good models can lead to better trading strategies.

Why do we use ML models?:

ML models are used because they save us from writing hundreds or maybe thousands of lines of code. That is mathematics: one equation can hold the knowledge of hundreds of pages. (Remember binomial expressions for calculating probability?)

Machine-learning solutions are cost-effective, and it takes much less time to implement such models than to design an expert system, which takes a large amount of time and may still produce error-prone results. A simple ML model, on the other hand, can be built in a few hours (a bit complicated and a little bit smart), and years can then be spent making it more reliable in the decision-making process.

How Machine learning models can learn more Effectively?:

When we are really into the model-building process, there is one thing we may get confused about: which model to use. I guess that is really the question one should focus on. There are not just hundreds but thousands of algorithms to look at, and each year those algorithms are developed further with different sets of features. So the real, pressing question remains: which model should we use?

The model you want to use really depends on your requirements and the type of features you believe could be helpful for predicting the outcomes. While choosing any algorithm, one should take care of three things: Representation, Evaluation, and Optimization. Representation describes the features one needs to pick for predictions, and your algorithm must be able to handle that feature space. This space is also called the hypothesis space.

Evaluation plays an important role as well. When you are figuring out which algorithm to use, or which one can come up with better results, you need some kind of evaluation function for your algorithm/model/classifier. Such an evaluation function can be internal as well as external; it all depends on your choice of algorithm/model.

Optimization is another thing that plays a very important role in every aspect of building a model. There are various factors to consider, such as the model’s prediction score, learning rate, and memory consumption, and optimization also covers how we handle the search over candidate models.

How do Machine-Learning Algorithms magically make predictions?

One thing we have to understand carefully is approximation/generalization. My favorite line about mathematics is: ‘Mathematics for engineers is all about finding an approximate solution, and that approximate solution is just enough to put a rover on Mars.’ An approximate solution gets you close to good predictions. Splitting your data into a train and a test set is best practice, and one should never underestimate the test set, because the train and test results are what let us start the process of optimization, such as choosing the type of search (‘beam search’ or ‘greedy search’) and watching how the test score improves. Notice that generalization being the goal has an interesting consequence for machine learning: unlike in most other optimization problems, we don’t have access to the function we want to optimize! So all your machine-learning model does is find common relations between predictors by experimenting thousands/millions/billions of times on your training data, and whether the patterns it finds hold in the future depends entirely on that experimentation/hypothesis/generalization.

Why Machine learning and Predictions are called BIG-Data?:

Let’s take an example that explains how much data is good data. Suppose you want to apply for a job: the chance of getting one is higher if you apply for many positions. Similarly, more data often beats a cleverer algorithm, because the machine has more cases from which to learn the classification. On the other hand, big data also demands heavy computing power, and programs like Apache Spark or Hadoop are the industry-standard examples used to process big data and turn it into an understandable form as fast as possible.

How many models/Algorithms I must learn?

There is a very simple answer: learn as many as you can and apply as many as you can. Ensemble learning is a very popular technique where multiple algorithms are combined to achieve better results. For example, in a Random Forest a set of decision trees is used; each sub-model (each decision tree) is given a particular weight, and after training, the best combination of the trees’ votes is used to predict the outcomes. To understand ensemble learning one has to look into the three most important techniques: 1. Bagging, 2. Boosting, and 3. Stacking.

Bagging: we simply generate random variations of the training set by re-sampling, learn a classifier on each, and combine the results by voting. This works because it greatly reduces variance while only slightly increasing bias.

Boosting: In boosting, training examples have weights, and these are varied so that each new classifier focuses on the examples the previous ones tended to get wrong.

Stacking: In stacking, the outputs of individual classifiers become the inputs of a “higher-level” learner that figures out how best to combine them.

Combining it all: split the training set into random variations, apply an algorithm to each, and rank the resulting models from highest to lowest [Bagging]. The models with the lowest ranks can then be improved (how? see the optimization section) [Boosting]. Now we have individual models with weights/ranks, each having learned from a different variation of the data; let’s stack them all. In stacking, for the higher-level model, each individual lower-level model’s output acts as the training input. Here we could have some bias toward the sub-models with greater scores from the bagging step, but the higher-level model also considers the decisions of the low-ranked individual models, which reduces the effect of that bias.
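
A minimal sketch of bagging and stacking with scikit-learn (StackingClassifier needs scikit-learn 0.22 or newer); the base learners and the toy dataset are placeholders:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Bagging: many trees trained on resampled variations of the training set, combined by voting.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

# Stacking: the outputs of the base models become the inputs of a higher-level learner.
stacked = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("bagging", bagged), ("stacking", stacked)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())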

Which are better Models? – Simple or Complicated?:

When it comes to developing a model, it is always assumed that simple models perform better and generate fewer errors. Yes, that is true in many cases, but one must not be biased in how we profile our models. Simple models have less error and are much faster than complicated ones, but that does not imply that a simple model should be built for every problem and that one should never try a complicated model. Complicated models exist because they also work and provide good results; ensemble learning is a good approach that has made this point completely valid.

How to know that which model/Algorithm should apply?: 

A few weeks ago I read an article about how one can come up with solutions/models/algorithms that can solve all the problems of the world, and for sure the answer given was AI (Artificial Intelligence). But are we really there yet? There can be a very long debate on this, but the real truth, or at least what I believe, is that one must visualize the relationships between two or more variables of the data, so one can tell whether the relationship is linear or non-linear, and moreover understand how such relations affect the actual outcome of the predictor. For example, there could be a nearly linear relationship between the close prices of two stocks, and linear regression could then be better suited than logistic regression or anything else. This simply conveys that no single model solves all problems; the hard part is finding out which model will solve which problem best. For that, a top-level overview of many models is required.

If it is correlated, does that mean it will always be true/predictable?:

That would be a convenient assumption, but what would be the need for data scientists if it were true all the time? Correlation does not imply causation, and that means correlation alone must not be the reason to build a model. When building a model there are various hypotheses a data scientist needs to propose, and only the ones that reflect a real relationship produce better predictions when applied in real life. In stocks, we can find a correlation suggesting that news directly affects stock prices, but a strategy that uses NLP (Natural Language Processing) to generate buy/sell calls from news alone might produce negative returns and lose investors’ money. There may be a correlation between news and stock prices, but that does not mean it is the only factor to consider when building a model that runs an NLP-based trading strategy. In the end we have various other things to consider, such as how much volume the stock trades, the stock’s ranking in the S&P 500, how much money is on the company’s books, and how that particular stock has performed over the past decades.

Some Final thoughts:

When you think about ML models and how they can create happiness in your life (here happiness means coming up with models that generate the best predictions, again and again), there is only one thing to remember: the ‘rule of evolution.’ You can sit on one technology/design for the rest of your life, or you can grow! Thinking about machine learning, or building models related to machine learning, is itself learning. The machine is desperate to learn, and learning is only possible by doing lots of experimentation: keeping the models that get good results for further improvement, and setting aside those that are not good enough to use, while still giving them another try sooner or later.

Preprocessing your Data and Why?

Preprocessing plays a very important role in data science. When we are going to feed data to an algorithm, there are various things to take care of, like normalization, label encoding, scaling, and so on.

A really manual method to remove outliers from your data is as follows:

**Your data should lie between median − 2*STD and median + 2*STD

STD = Standard Deviation

In the above rule we are using the median as the measure of central tendency, simply assuming it is better (more robust) than the mean, but in real life it may not be the most accurate formula to work with.
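
A minimal sketch of the median ± 2*STD filter described above; the array (with one obvious outlier) is made up:

import numpy as np

data = np.array([4, 5, 6, 7, 12, 20, 150])   # 150 is an obvious outlier
median, std = np.median(data), data.std(ddof=1)

lower, upper = median - 2 * std, median + 2 * std
filtered = data[(data >= lower) & (data <= upper)]
print("Kept:", filtered)   # values outside median ± 2*STD are dropped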

In this post we will discuss various data-preprocessing techniques used in data science and machine learning that may help increase the prediction rate. Sometimes you can live without preprocessing, but sometimes it is good to preprocess your data; this ‘sometimes’ depends on your understanding of the work you do.


The health of your data describes the wealth of your model. That means your data is the most important part, which is the main reason preparing your data for use usually takes 70-80% of the time, well, maybe not 70-80% if you are a Python ninja 😀 🙂

Let’s understand Data-Preprocessing in Richard Stallman’s style. (What is Richard Stallman’s style?)

Hack the source!!

Data Scaling or Standardization:

z = (x − μ) / σ

Sigma represents Standard-Deviation and Mu represents Mean.

It is always great to have less-biased data with low variance (spread), but why?

Think of the activation function in a neural network during forward propagation: what the activation function does is squash each input into the range of zero to one (0 to 1), which scales down the range of the data. But in other algorithms, like regression or classification, we don’t have that automatic scaling facility, so we apply manual scaling methods. (Too bad I have to write one more function before training my data :D) One more thing to remember: ‘Your neural network will train much faster if you feed it normalized data.’

So by decreasing the spread/variance we can achieve better predictions, because it is easier for the system/algorithm to find patterns in a smaller area. Here is a small portion of the Wikipedia article about feature scaling that you might find interesting:

For example, the majority of classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance.    

from sklearn import preprocessing
import numpy as np

frag_mented_array = np.array([2,78,12,-25,22,89])
defrag_array = preprocessing.scale(frag_mented_array)   # standardize: zero mean, unit variance

/home/metal-machine/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py:429: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
warnings.warn(msg, _DataConversionWarning)


defrag_array
array([-0.67832556, 1.18502658, -0.43314765, -1.34030593, -0.18796973,
1.45472229])

**Please never underestimate that warning in real life. Look into the advanced sections on NumPy arrays, which will teach you how you can assign dtypes to NumPy arrays!

Now the question arises: should I scale my training data or my testing data? Well, the answer is both (see the sketch after the list of scalers below). Look at the StandardScaler class in scikit-learn:

http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler

There are some other useful features as well:
MinMaxScaler – Scale data between range (0,1)
MaxAbsScaler – Scale data between range (-1,1)
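
As promised above, a minimal sketch of fitting a scaler on the training data only and applying the same transformation to the test data; the tiny arrays are placeholders:

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[2.0], [78.0], [12.0], [-25.0]])
X_test = np.array([[22.0], [89.0]])

scaler = StandardScaler().fit(X_train)     # learn mean and std from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)   # the test data is scaled with the training statistics
print(X_train_scaled.ravel(), X_test_scaled.ravel())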

One question arises here: when should I use standard scaling, MinMaxScaler, or MaxAbsScaler?

The answer can be tricky, or ‘not in plain form’: it just depends on the requirements of the model/ML algorithm you are going to apply to your data.

We can also think of it as asking whether decreasing the ‘Euclidean distance’ between points affects the performance of your model or not.

 


Normalization:

Normalization is the process of finding unit norms for individual samples (rows).
Remember one thing carefully: scaled (standardized) data will always have zero mean and unit variance; achieving that is the real purpose of scaling or standardization. Scaling is somewhat a personal choice of how and within what limits you want to scale your data, but when it comes to normalization you have to work to an external standard.

‘Normalizing’ can also mean applying a transformation so that your transformed data is roughly normally distributed, but normalizing in scikit-learn refers to rescaling each observation (row) to have a length of 1 (called a unit norm in linear algebra).
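
A minimal sketch of the scikit-learn meaning of normalization, rescaling each row to unit length; the tiny matrix is a placeholder:

import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])
X_normalized = normalize(X, norm='l2')        # each row now has Euclidean length 1
print(X_normalized)
print(np.linalg.norm(X_normalized, axis=1))   # [1. 1.]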

Binarization:

It is a really simple and effective process to think about. It converts your data into binary form, that is, 0 and 1 (boolean values). Algorithms used in stock trading rely heavily on binarization: we use 0 if the stock’s predicted price will go down and 1 if the price will go up, or vice versa. The important thing to note is how binarization helps the algorithm make final decisions by predicting values in the form of 0 and 1.
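
A minimal sketch of turning daily price changes into 0/1 up/down labels, in the spirit described above; the price series is made up:

import numpy as np

prices = np.array([100.0, 101.5, 101.0, 103.2, 102.8])
daily_change = np.diff(prices)            # change from one day to the next
labels = (daily_change > 0).astype(int)   # 1 if the price went up, 0 otherwise
print(labels)                             # [1 0 1 0]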

 


Removing Bias from Your Data:

Biased data leads your model toward one narrow slice of the real universe: your model will only understand the bias in your data and will learn/make predictions based only on that bias. (That is the reason we must use hypothesis testing and cross-validation before running our model in production.) One example of bias that comes to my mind is as follows:

Suppose you picked Apple’s stock price from the period 01-01-14 to 01-01-15, and let’s assume in that period Apple’s stock went up every day/month/quarter. After training your model on that particular time period, it will keep predicting Apple’s future price as higher than the present price, because that became the nature of the model after learning from biased data.

This is just a toy example to show readers how bias affects your model.


Survivorship Biased data:

A historical database of stock prices that does not include stocks that have disappeared due to bankruptcies, de-listings, mergers, or acquisitions suffer from the so-called survivorship bias, because only “survivors” of those often unpleasant events remain in the database. (The same term can be applied to mutual fund or hedge fund databases that do not include funds that went out of business.)


Backtesting a strategy using data with survivorship bias can be dangerous because it may inflate the historical performance of the strategy.

Disclaimer: This is not a foolproof post about data preprocessing; there are many other things to know, like PCA (Principal Component Analysis) or gradient descent, for a deeper understanding of the machine-learning operations applied to your data.

Important Julia Packages

  1. Julia Ipython

Julia runs very well in your IPython notebook environment. After all, all you have to do is data science and machine learning. 🙂


1.1 Open Julia Prompt(At Ubuntu it works like typing ‘julia’ command in your Terminal)

1.2 Run the command > Pkg.add("IJulia") # it will do almost all the work.

2. DataFrames: Whenever you have to read a lot of files in Excel-style tabular form, the Julia DataFrames package is the way to go.

Pkg.add("DataFrames")

3. Arduino:

A Julia Package for interacting with Arduino.

https://github.com/rennis250/Arduino.jl

4. Neural Network Implementation of Julia

https://github.com/compressed/BackpropNeuralNet.jl

5. Visualizing and Plotting in Julia:

https://github.com/bokeh/Bokeh.jl

6. Reading and writing CSV files in Julia

https://github.com/JuliaData/CSV.jl

7. Data Clustering in Julia:

https://github.com/JuliaStats/Clustering.jl

For a larger list of packages, please refer to the following link:

http://pkg.julialang.org/

Note*: You can also run most shell commands in the Julia environment. 🙂

Let’s have fun with Correlation:-O

Ok have some fun first. 😀

Whenever you read a post or paper related to machine learning or data science, you will see the word ‘Correlation’ many times, along with how important its value is in your model building.

A simple definition of Correlation: a mutual relationship or connection between two or more things. (That’s the layman’s definition and it should be enough most of the time 😉 )

The Coefficient of Correlation is just a number between -1 and +1, from which we understand how two or more things are related to each other. As it is a signed number it can be +ve or -ve, and the value of the correlation tells how strongly two data sets move with each other.


A Coefficient of Correlation in the range -0.5 to +0.5 is not that valuable. But why, and how do we calculate correlation?

What is Covariance ?

Now if you still feel that something is really missing, we should talk about Variance:

Let’s remove the “Co” from Covariance.

Variance is a measurement of randomness (spread). So how would you calculate the variance of data?

Give me data:

Data = [4,5,6,7,12,20]

I will find the mean and subtract it from each individual value – isn’t that just Mean Deviation? 😀 OMG! (Almost: for the variance we square those deviations and then average them; covariance does the same with the deviations of two series multiplied together.)
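
A minimal sketch of variance, covariance, and correlation for the data above; the second series is made up just to have something to correlate against:

import numpy as np

data = np.array([4, 5, 6, 7, 12, 20])
other = np.array([8, 9, 13, 15, 25, 42])   # hypothetical second series

print("Variance:", data.var(ddof=1))                    # average squared deviation from the mean
print("Covariance:", np.cov(data, other)[0, 1])         # how the two series move together
print("Correlation:", np.corrcoef(data, other)[0, 1])   # covariance scaled into [-1, +1]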


Let’s wait for stuff like:

Coefficient of determination, Probable Error and interpretation.

Hacker’s Guide to Quantitative Trading(Quantopian Python) Part 2

Quantopian provides the required API functions, data, a helpful community, and a batteries-included web-based dashboard to play with algorithmic trading, create your own trading strategies, and launch your trading model in the live market.

Here I will only talk about code and how it should be written to create your own Trading Strategy.

There are basically Two methods.

initialize() and handle_data()

initialize() acts as an initializer for various variables, the same as the __init__ method in Python.

Now, what kind of variables we have to declare in the initialize() function depends on your strategy: we can select a limited number of stocks, days, the type of trading, and the variables required for the algorithm.

A very simple example of initialize() code could look as follows:

def initialize(context): # consider context just as 'self' in Python

   context.stocks = [sid(24),sid(46632)] # sid stands for stock_id

initialize() also contains the stuff that can be used many times or all the times in our Trading Algorithm:

1. A counter that keeps track of how many minutes in the current day we’ve got.

2. A counter that keeps track of our current date.

3. A list that stores the securities that we want to use in our algorithm.

Whatever variables that you define here will remain persistent (meaning that they’ll exist) but will be mutable. So that means that if you initialize context.count as 0 in initialize, you can always change it later in handle_data().

A Simple Example of handle_data():

def handle_data(context,data):

   for stock in context.stocks:

        if stock in data:

            order(stock,1)

Momentum Strategy:(Common Trading Strategy)

In this strategy we consider Moving average price of stock as an important factor to make decision to put a security price in Long or Short.

Here is simple explanation of momentum Strategy:

● If the current price is greater than the moving average, long the security

● If the current price is less than the moving average, short the security

Now we will use the Quantopian API to implement this strategy for trading. Our algorithm here is going to be a little more sophisticated: we’re going to look at two moving averages, the 50-day moving average and the 200-day moving average.

David Edwards writes that “the idea is that stocks with similar 50 & 200 day moving averages are more likely to be fairly valued and the algorithm will avoid some of the wild swings that plague momentum strategies. The 50/200 day crossover is also a very common signal, so stocks might be more likely to continue in the direction of the 50day MA because a lot of investors enter and exit positions at that threshold.”

The decision-making behind Moving-average is as follows:

● If the 50 day moving averages is greater than the 200 day moving average, long the security/stock.

● If the 50 day moving average is less than the 200 day moving average, short the security/stock

So now Let’s make a Trading Bot!

1. First we have to create our initialize() function:

def initialize(context):

   set_universe(universe.DollarVolumeUniverse(floor_percentile=99.5,ceiling_percentile=100))

”’Set universe is an inbuilt Quantopian function which provides us the stocks within the required universe. Here we have selected DollarVolumeUniverse with 99.5% and 100% as our floor and ceiling. This means that we’ll be selecting the top 99.5 ~ 100% of stocks in our universe with the highest dollar*volume scores.”’

Please read the comments in the code.

   context.stocks_to_long = 5

   context.stocks_to_short = 5
   context.rebalance_date = None # we will get today's date then we will keep positions active for 10 days here

   context.rebalance_days = 10 # it is just an assumption now for 10 days or finer value


Now that we have defined the required __init__ parameters in initialize(), let’s move to

handle_data()

def handle_data(context,data):

   if context.rebalance_date!=None: # if the rebalance date is not null, set next_date for changing the position of the algorithm

       next_date = context.rebalance_date + timedelta(days=context.rebalance_days) # next_date should be that many days away from rebalance_date

   if context.rebalance_date==None or next_date==get_datetime(): # if today is the day, 10 days after we marked long/short for our stocks

       context.rebalance_date = get_datetime() # set rebalance_date to today so next_date will again be set 10 days ahead of rebalance_date

       historical_data = history(200,'1d','price')

Get the historical data of all the stocks initialized in the initialize() function: ‘1d’ = 1-day bars, 200 = number of days, ‘price’ = we fetch only the price field because that is all our strategy requires; for some other strategy the volume of the stock could be more useful.

  past_50days_mean = historical_data.tail(50).mean()

  past_200days_mean = historical_data.mean()

  diff = past_50days_mean/past_200days_mean-1

# if diff>0 we will long, if diff<0 we will short

   buys = diff[diff>0]

   sells = diff[diff<0]

# here we will get the lists of securities/stocks whose 50-day moving average is

# above as well as below the 200-day moving average

   buys.sort() # sorting the buys list why? - getting top securities from the top - more is better
   sells.sort(ascending=False) # reverse sorting the sells list - getting top securities from the bottom, less is better because we are selling against the market
   buys = buys.iloc[:buy_length] if buy_weight !=0 else None # buy_length = number of securities we want to purchase
   sells = sells.iloc[:short_length] if short_weight !=0 else None # short_length = number of securities we want to short

Now we have buys and sells as two lists!! (Remember this carefully.) All the decisions are going to be made based on these two lists.

We can also implement risk factors in our trading strategy. Let’s implement a minimal form of risk factor: a stop at 0.02% of the last traded price, which means that if the security moves much lower than that, we will exit.

We will go through each security in our data/universe, and those that satisfy the condition of the ‘buys’ or ‘sells’ list will be bought or sold.

# if security exists in our sells data

   for sym in data:

       if sells is not None and sym in sells.index:

           log.info('SHORT:%s'%sym.symbol)

           order_target_percent(sym,short_weight,stop_price=data[sym].price+stops[sym])



# here stop_price is the price of the security in real time + the change held in stops

# order_target_percent is an inbuilt function.




   # if security exists in our buy data

   elif buys is not None and sym in buys.index:

       log.info('Long:%s'%sym.symbol)

       order_target_percent(sym,buy_weight,stop_price=data[sym].price-stops[sym])

   else:

       order_target(sym,0)


The `order_target_percent` method allows you to order a % target of your portfolio in that security. So this means that if 0% of your total portfolio belongs in AAPL and you order 30%, it will order 30%. But if you had 25% already and you tried ordering 30%, it will order 5%.

You can order using three different special order methods if you don’t want a normal market order:

#`stop_price`: Creates a stop order

#`limit_price`: Creates a limit order

#`StopLimitOrder`: Creates a stop-limit order





How Trading Differentiates from Gambling:

Most of the time, when you find that you are able to get good returns on your capital, you try to beat the market. Beating the market means trying to earn much more than the fair returns the market gives you for your stock. This can be attempted through various actions, like reversing the momentum or looking for bad happenings in the market (which is also called finding the shit!). Some people are really good at this kung-fu, but since you are just a budding trader with only your own limited money, one important thing should be remembered: “Protect your capital.” That’s what most of the big banks do, and if they hire you as their quant or trading-execution person they will expect the same from you. Big banks have billions of dollars that they don’t want to lose, but they definitely want to use that money to get good returns from the market.

So they follow one simple rule for most of the times.

Guaranteed returns even if those are low.

[Make sure the returns are positive after subtracting various costs like brokerage, leverage, etc., because getting positive returns while neglecting market costs is far easier, and such strategies should not be used with real money.]

So the real key is to think like a computer programmer in the first place: it should work first. So the first thing to make sure of is getting returns, even low but stable returns, while calculating the various risk factors.

I am quoting some informative things from the SentDex tutorial:

Most individual traders are trading on account sizes of somewhere between maybe $25,000 and $100,000 USD, so their motives are to hopefully increase that account size as much as possible, so this person is more likely to take part in High Risk High Yield (HRHY).

Most people who use HRHY strategies, tend to ignore the HR (High Risk) part, focusing on the HY (High Yield).

The same is common with gamblers,even over astronomical odds with things like the lottery.

In other words, always ask yourself – what’s about the market that makes my strategy work? Because, at the end of the day, algorithmic trading is more about trading than about algorithm.

Hacker’s Guide to Quantitative Trading(Algorithmic Trading) Part 1

Quant-ita-tive Analytics:

The word Quant is derived from Quantitative Analytics. The present financial market is very heterogeneous in nature. With the arrival of many private banks in the financial sector, most private banks now also invest their account holders’ funds in the stock market as “safe” trading, with very low but almost guaranteed returns (money they earn from trading). There was an exception to this ‘safe investment’ story used by banks when the whole economy crashed: millions of people took their own lives, lost their homes, lost their jobs, and most of the small European countries like Greece, ____, ____, were almost finished.

The main reason this whole disaster happened in the digital age was ‘not looking into the data properly.’

We can also describe Quantitative Analytics as the process where a person/mathematician/expert/data scientist/computer programmer (with domain-specific knowledge) parses, analyses, and develops results from MBs, GBs, or sometimes TBs of data, results that are based on the history of trading. Such results help big banks or big investors in the financial market to build trading strategies with the target of gaining maximum profit from trading equities, or at least to play safely with their money, which yields low but assured returns.

Statistical science Plays an important role in study of Quantitative Analytics:

Statistical science deals with every aspect of our life. Before going further into statistical science we have to understand the meaning of ‘statistics’.
According to the Dictionary of Statistical Terms:

“”Numerical data relating to an aggregate of individuals or the science of
collecting,analyzing and interpretation such data.””

There are various statistical methods used to analyze data that is quantitative in nature (HUGE with many variables, or SMALL with a few, where the quant’s job is to analyze how those variables are correlated), for example: grouping; measures of location (average: mean, median, mode); measures of spread (range, variance and standard deviation); skew; identifying patterns; univariate and multivariate analysis; comparing groups; and choosing the right test for the data.

Word Quant comes from:
The word Quant comes from the person who uses mathematical formulas and machine-learning techniques to predict the output of the stock market. There are various other profiles as well, like the Quant Developer: a trading strategy written out in steps (not in code) is handed to a programmer with domain-specific knowledge of the financial system,
and it is the Quant Developer’s job to convert the trading strategy into a live trading system (a system that buys or sells stocks, options, and futures to profit from the market).

Mathematician and Quant:
The relationship between a quant and a mathematician is quite close, or in some sense it can be stated that a quant is a person having fun with mathematics in real life. A quant weighs various factors while implementing an algorithm/equation in a real-time trading system, and shows the results to other people in the form of graphs rather than complicated equations and significance tests.

So in some sense a quant uses serious statistics to make sense of data and to tell people why his/her trading strategy is better at producing profit from the financial market. A quant developer’s only job is to write code for the algorithms being used, but as a quantitative analyst one has to have the various statistical skills which help make sense of stock-trading data.

Process(!steps) of doing Quantitative analytics:

Take Care of Data: At first as a analyst you should not get yourself lost in the data.

Frequency distribution: Find the frequency of each occurring value of a single variable. Histograms are best for frequency analysis, as sketched below.

Find descriptive statistics: We can use measures of central tendency (mean, median, mode) and of dispersion (range, standard deviation, variance) to learn more about the data; a small example follows.
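A minimal sketch of these descriptive statistics with pandas (the price series below is made up):

```python
import pandas as pd

prices = pd.Series([101.2, 99.8, 100.5, 102.3, 98.7, 100.1], name="close_price")

# Central tendency
print("mean:  ", prices.mean())
print("median:", prices.median())
print("mode:  ", prices.mode().tolist())

# Dispersion
print("range: ", prices.max() - prices.min())
print("std:   ", prices.std())
print("var:   ", prices.var())
```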

Comparing means: Perform various tests on your data, such as t-tests, as in the sketch below.
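For instance, a hedged sketch of comparing the mean daily returns of two hypothetical strategies with SciPy's independent two-sample t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical daily returns of two strategies.
returns_a = np.array([0.002, -0.001, 0.003, 0.001, 0.000, 0.002])
returns_b = np.array([0.001, 0.000, -0.002, 0.001, -0.001, 0.000])

t_stat, p_value = stats.ttest_ind(returns_a, returns_b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value would suggest the two mean returns genuinely differ.
```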

Cross tabulation: Cross tabulation shows how the categories of different variables relate to each other.

Correlations: Find the relationship between two numeric variables.

** Never mix up correlation and cross tabulation in your thinking; the sketch below shows both side by side.
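To keep the two ideas apart, here is a small, purely illustrative sketch: pd.crosstab counts how categories co-occur, while .corr() measures the linear relationship between two numeric columns (all names and values are invented).

```python
import pandas as pd

df = pd.DataFrame({
    "sector": ["tech", "bank", "tech", "bank", "tech"],
    "signal": ["buy", "sell", "buy", "buy", "sell"],
    "volume": [120, 80, 150, 90, 110],
    "ret":    [0.02, -0.01, 0.03, 0.00, 0.01],
})

# Cross tabulation: counts of each signal per sector (categorical vs categorical).
print(pd.crosstab(df["sector"], df["signal"]))

# Correlation: strength of the linear relationship (numeric vs numeric).
print(df["volume"].corr(df["ret"]))
```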

Trading Strategy/Algorithm:
When traders buy or sell stocks, options, or futures in the market, they base their decisions on various calculations; a combination of such decisions is called a trading strategy. A strategy can be built and applied without using any programming language. In algorithmic trading, a quant and a developer come up with a self-running computer program (the trading strategy, or quantitative model) built on those trading decisions to automate the whole process of buying and selling stocks.
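As an illustration only (a minimal sketch, not a profitable strategy), a self-running moving-average crossover rule over made-up closing prices could look like this:

```python
import pandas as pd

# Made-up closing prices; in a live system these would come from a market data feed.
close = pd.Series([100, 101, 102, 101, 103, 105, 104, 106, 108, 107, 109, 111],
                  name="close")

fast = close.rolling(window=3).mean()  # short-term moving average
slow = close.rolling(window=6).mean()  # long-term moving average

# Rule: hold a long position (1) while the fast average is above the slow one, else stay flat (0).
position = (fast > slow).astype(int)

# Strategy return: yesterday's position applied to today's price change.
strategy_returns = position.shift(1) * close.pct_change()
print(strategy_returns.dropna())
```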

Tips and Tricks while building Quantitative models:

* The important thing about your trading model is that it should perform well in back-testing (make sure the back-testers differ for stocks and for futures), but it should perform just as well in forward testing (paper trading).

* You need a separate model for each kind of trading. A model (strategy) that works in the Bitcoin market will not necessarily be beneficial for stocks.
* Run the strategy as an experimental project for several days to collect results, read the data, and refine the strategy.

* Every strategy is sensitive to various risk factors.

* Think about **risks**; think about how to eliminate them.
* Run multiple models at the same time: some will lose, some will win.

* One strategy is not enough; strategies lose their efficiency after a certain period of time.

* Back-testing is not always the truth. Never try to create a model that matches your back-test 100%, because that is over-fitting; instead, try to create generalized, simple models, which cope better with abrupt changes in live trading.

* What actually happens in the market is that strategies end up catching other strategies.

* Right now, every strategy needs a human to operate it.

* If we know what kind of shields we have given our model, we will know whether the kinds of events showing up in the news can affect our strategy.

* A human alone is not enough to trade well; we must use trading strategies to achieve great trades with profitable returns.
* Diversification of strategies is just as important.

* Momentum trades sometimes end in losses and sometimes in very "good" profits, because with momentum we try to find what is hot in the market, and what is hot can go very high as well as very low.

* A trading model may not take into consideration an earthquake or the fall of a government in some country, but humans can.

* Using sentiment-analysis tools we can now incorporate news while building a strategy, but most back-testers do not contain that news data, so the strategy's performance with news cannot be checked historically. Claiming that adding a news feed will increase the chances of a strategy working may therefore not be true all the time.

* Sometimes using news data as input can end up overriding the entire model, because 'the news is not always true' 🙂

* Keep reading new ideas in blog posts and look for interesting books.

* Learn how to interpret the live market.

* Just as in programming we say that things mostly do not work the first time, the same is true of trading strategies. 🙂

* A back-test is at least good for ruling out negative results; the higher the Sharpe ratio, the greater the significance of the strategy (a minimal Sharpe-ratio sketch appears after this list).

* The data you use to build your strategy is just as important as the factors you consider while building the model.

* You will come across situations where you feel you either need to stop using a particular strategy or modify it.
* Treat the back-test as a null hypothesis: we assume that, in this specific case, our model is accurate.
* Always concentrate on Sharpe ratios.

* Quantitative trading is suitable for practically anybody. It can be slow or fast, and one does not need to be a math ninja or a computer wizard to do it.

* It is always good to start with simple strategies available in the public domain: implement them, run tests, tune, play, and get rich. 🙂

* It is better to go with open-source languages (Python or C++) because they can very easily be turned into live trading systems.
* Choosing a standard language is always a great idea because a wide range of library support is there to build things! 🙂 😀 (Python)
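Since several of the points above lean on the Sharpe ratio, here is a minimal sketch of the usual calculation from a series of strategy returns (assuming daily returns, 252 trading days per year, and a zero risk-free rate for simplicity):

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252, risk_free_rate=0.0):
    """Annualised Sharpe ratio of a series of periodic returns."""
    excess = np.asarray(returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Hypothetical daily returns taken from a back-test.
daily_returns = [0.001, -0.002, 0.003, 0.000, 0.002, -0.001, 0.001]
print(round(sharpe_ratio(daily_returns), 2))
```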

Top view of Data Science and machine learning

As a data scientist you should not limit yourself to the training you received; you have to think further ahead about the many things around the system. Think like a great thinker: how many variables does it take to run the system, how many variables can really affect it, and at what level?

Let's look at the top three questions:

What is a Data Scientist?

Why does it appear to be such a hard job?

What skill set does a Data Scientist need?

The first thing we have to remember, or take care of, is big data. Organizations have lots of data that they collect from various sources (mostly clients and their activities), but that data is HUGE and they have no idea how to manage it in any particular order. A data scientist's job is to find meaning in that data.

What kind of skills should one look for in a Data Scientist?

For a data scientist's job, one can hire a PhD, a mathematician, or a statistician, or one can grow a data scientist from within the organization.

What is the fundamental job of a Data Scientist?

A data scientist is the one who makes new discoveries from data.

That is what scientists do: a scientist first makes a hypothesis and then tries to investigate that hypothesis under various conditions. In the case of data scientists, they just do it with data!

Data scientists look for meaning and knowledge in data, and they do that in a couple of different ways.

Visualization of Data:

One example is data visualization: data scientists visualize data in various forms and look for meaning in it. This is also what a business intelligence or data analyst role might do.
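A tiny, purely illustrative sketch of that kind of visual exploration (the columns and numbers are made up):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50, 60],
    "sales":    [15, 22, 35, 41, 48, 62],
})

# A scatter plot is often the quickest way to "see" whether two variables move together.
df.plot(kind="scatter", x="ad_spend", y="sales", title="Sales vs. ad spend")
plt.show()
```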

Using Algorithms:

The second way is advanced algorithms that actually run through the data looking for meaning: machine-learning algorithms such as neural networks, SVMs, regression, or k-means. There are dozens of such algorithms, and running them through data in search of meaning is one of the fundamental tools of the data scientist.
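For example, a minimal k-means sketch with scikit-learn, one of the algorithms named above (the 2-D points are invented, e.g. customers described by visits and basket size):

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented 2-D points: e.g. (visits per month, average basket size).
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # which cluster each point was assigned to
print(kmeans.cluster_centers_)  # the "meaning" found: two group centres
```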

So, to use those algorithms, a data scientist must have knowledge of mathematics, statistics, and computer science.

So how does a data scientist's work get started and done?

A data scientist is given a large (or maybe small) data set along with a question.

Something like: which customers are likely to return?

Or: which customers are likely to buy on weekends?

Or: how many families buy sweets or fruits during festivals? (From this you can also find the income range of those families.) That is a classic statistics problem.

Now it is the data scientist's job to run various algorithms on the data and look for the answer. One simple thing to think about is how and why a specific algorithm would work on that data: with a basic, general-level knowledge of the algorithms, we can judge whether a given algorithm would really answer such a question. A small sketch of this follows.
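To make that concrete, here is a hedged sketch of the "which customers are likely to return?" question answered with a simple classifier (logistic regression; every column name and value below is invented):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented historical data: features of past customers and whether they returned.
data = pd.DataFrame({
    "visits_last_month": [1, 5, 2, 8, 0, 7, 3, 6],
    "avg_spend":         [10, 80, 20, 90, 5, 70, 30, 60],
    "returned":          [0, 1, 0, 1, 0, 1, 0, 1],
})

X = data[["visits_last_month", "avg_spend"]]
y = data["returned"]

model = LogisticRegression().fit(X, y)

# Estimated probability that a new customer (4 visits, spend of 50) will return.
new_customer = pd.DataFrame({"visits_last_month": [4], "avg_spend": [50]})
print(model.predict_proba(new_customer)[0, 1])
```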

"So data scientists go through various algorithms until they find some pattern in the data that answers the question."

The same thing applies to any trading strategy: in our research we have to look at why a specific algorithm would work out. Another question is how a data scientist can improve the recommendations of a recommendation engine.

Netflix ran a competition in which it offered a million dollars to whoever could improve its recommendation algorithm by 10%.

A team of data scientists actually came up with an algorithm that did exactly that.

So, again, we can say that data scientists are people who answer questions, and they use data, or rather the combination of data and algorithms, to answer those questions.

When you have a large data set with many categories and columns, you have to rely on various algorithms, so fundamental knowledge of those algorithms is what a data scientist must have.

What is a Data Scientist not?

There are various myths about data scientists. A data scientist is not just a Java programmer who knows Hadoop; many people bill themselves as data scientists because they have such technical skills, but they are not data scientists unless they also know data-discovery techniques and how algorithms behave on data!

What is the difference between a Data Scientist and a Data Analyst?

We should also not confuse a data analyst or business analyst with a data scientist. A data analyst is the one who creates reports, graphs, and dashboards based on data, and those reports reflect the analyst's own judgement of what is "important" to show or consider.

A data scientist is the one who hypothesizes what is important and then tries to prove that hypothesis.

It is great for one person to have both programming skills and business domain knowledge, but most important of all is fundamental knowledge of algorithms, mathematics, and statistics. That is one reason it is a bit difficult to find a data scientist: the job needs a unique mix of skills.
