Financial analyst

A financial analyst, securities analyst, research analyst, equity analyst, investment analyst, or rating analyst is a person who performs financial analysis for external or internal financial clients as a core part of the job.

Analysts are generally divided into ‘buy-side’ and ‘sell-side’. A buy-side analyst, such as a fund manager, works for a company which buys and holds stocks itself, on the analyst’s recommendation. A sell-side analyst’s work is not used by its employer to invest directly, rather it is sold either for money or for other benefits by the employer to buy-side organisations.

Sell-side research is often used as ‘soft money’ rather than sold directly, for example provided to preferred clients in return for business. It is sometimes used to promote the companies being researched when the sell-side has some other interest in them, as a form of marketing, which can lead to conflicts of interest. The buy-side is sometimes considered more prestigious, professional, and scholarly, while the sell-side may be higher-paid and more like a sales and marketing role. It is common to begin careers on the sell-side at large banks then move to the buy-side at a fund.

Writing reports or notes expressing opinions is always a part of the “sell-side” (brokerage) analyst’s job, and is often not required of “buy-side” (investment firm) analysts. Traditionally, analysts use fundamental analysis principles, but technical chart analysis and tactical evaluation of the market environment are also routine. At the end of an assessment of the analyzed securities, an analyst typically provides a rating recommending an investment action, e.g. to buy, sell, or hold the security.

Analysts obtain information by studying public records and filings by the company, as well as by participating in public conference calls where they can ask direct questions of management. Additional information can also be obtained in small-group or one-on-one meetings with senior members of management teams. However, in many markets such information gathering became difficult and potentially illegal due to legislative changes brought about by corporate scandals in the early 2000s. One example is Regulation FD (Fair Disclosure) in the United States. Many other developed countries have adopted similar rules.

Financial analysts are often employed by mutual and pension funds, hedge funds, securities firms, banks, investment banks, insurance companies, and other businesses, helping these companies or their clients make investment decisions. Financial analysts employed in commercial lending perform “balance sheet analysis,” examining the audited financial statements and corollary data in order to assess lending risks. In a stock brokerage house or in an investment bank, they read company financial statements and analyze commodity prices, sales, costs, expenses, and tax rates in order to determine a company’s value and project future earnings. In any of these various institutions, the analyst often meets with company officials to gain a better insight into a company’s prospects and to determine the company’s managerial effectiveness. Usually, financial analysts study an entire industry, assessing current trends in business practices, products, and industry competition. They must keep abreast of new regulations or policies that may affect the industry, as well as monitor the economy to determine its effect on earnings.

Financial analysts use spreadsheet and statistical software packages to analyze financial data, spot trends, and develop forecasts; see Financial modeling. On the basis of their results, they write reports and make presentations, usually making recommendations to buy or sell a particular investment or security. Senior analysts may actually make the decision to buy or sell for the company or client if they are the ones responsible for managing the assets. Other analysts use the data to measure the financial risks associated with making a particular investment decision.

Financial analysts in investment banking departments of securities or banking firms often work in teams, analyzing the future prospects of companies that want to sell shares to the public for the first time. They also ensure that the forms and written materials necessary for compliance with Securities and Exchange Commission regulations are accurate and complete. They may make presentations to prospective investors about the merits of investing in the new company. Financial analysts also work in mergers and acquisitions departments, preparing analyses on the costs and benefits of a proposed merger or takeover.

Some financial analysts collect industry data (mainly balance sheet, income statement, and, in the banking sector, capital adequacy), merger and acquisition history, and financial news for their clients. They normally standardize the data from different companies so that it looks uniform, making it easier for clients to do peer analysis. Their main objective is to enable clients to make better investment decisions across different regions. They also provide an abundance of financial ratios, calculated from the data gathered from financial statements, that help clients read a company’s bottom line. This work is sometimes mistaken for data entry, but the job duties go well beyond that.

Some financial analysts, called ratings analysts (who are often employees of ratings agencies), evaluate the ability of companies or governments that issue bonds to repay their debt. On the basis of their evaluation, a management team assigns a rating to a company’s or government’s bonds. Other financial analysts perform budget, cost, and credit analysis as part of their responsibilities.

Analyst performance is ranked by a range of services such as StarMine owned by Thomson Reuters or Institutional Investor magazine.

Research by Numis found that small companies with the most analyst coverage outperformed peers by 2.5 per cent — while those with low coverage underperformed by 0.7 per cent.[1]

A 1999 paper by Ezra Zuckerman found that, as equity analysts divide securities into discrete sectors, companies which fall outside or across multiple sectors are penalized in analysts’ ratings.[2]

Qualification

An increasing number of firms prefer that analysts earn a master’s degree in finance or the Chartered Financial Analyst (CFA) designation. There are also many regulatory requirements. For example, in the United States, sell-side or Wall Street research analysts must register with the Financial Industry Regulatory Authority (FINRA). In addition to passing the General Securities Representative Exam, candidates must pass the Research Analyst Examination (Series 86/87) in order to publish research for the purpose of selling or promoting publicly traded securities.

Compensation

The job title is a broad one. As of 2012, the median pay in the United States was $76,950 per year according to the Bureau of Labor Statistics.[3] SumZero found that professionals in the hedge fund industry average $409,826 per year including bonus and deferred pay.[4]

Controversies about financing

Analyst recommendations on stocks owned by firms employing them may be seen as potentially biased.

The research department sometimes lacks the ability to generate enough revenue to be a self-sustaining research company. Research analysts are therefore sometimes part of the marketing department of an investment bank, brokerage, or investment advisory firm.

Since 2002 there has been extra effort to overcome perceived conflicts of interest between the investment part of the firm and the public and client research part of the firm (see accounting scandals). For example, research firms are sometimes separated into two categories, brokerage and independent. Independent researchers are not part of an investment firm and so don’t have the same incentive to issue overly favorable views on companies. But this might not be sufficient to avoid all conflicts of interest. In Europe, the Markets in Financial Instruments Directive 2004 and subsequent related legislation has in part been an attempt to clarify the exact remit of equity analysts.

Debate still exists about the way sell-side analysts are paid. Usually brokerage fees pay for their research. But this creates a temptation for analysts to act as stock sellers and to lure investors into “overtrading”.

Some consider that it would be sounder if investors had to pay for financial research separately and directly to fully independent research firms.

Source: Financial analyst, https://en.wikipedia.org/w/index.php?title=Financial_analyst&oldid=860324894 (last visited Dec. 2, 2018).

Legal Entity Identifier

A Legal Entity Identifier (or LEI) is a 20-character identifier that identifies distinct legal entities that engage in financial transactions. It is defined by ISO 17442.[1] Natural persons are not required to have an LEI, though they are eligible for one if they act in an independent business capacity.[2] The LEI is a global standard, designed to be non-proprietary data that is freely accessible to all.[3] As of October 2017, over 630,000 legal entities from more than 195 countries had been issued LEIs.[4]

History

At the time of the 2008 financial crisis, no identification code unique to each financial institution was available worldwide; each country used a different code system to identify the counterparties of financial transactions. As a result, it was impossible to identify the transaction details of individual corporations, to identify the counterparties of financial transactions, and to calculate total risk amounts. This caused difficulties in estimating individual corporations’ risk exposure, analyzing risks across the market, and resolving failing financial institutions, and it was one of the factors that complicated the response in the early stages of the financial crisis. The LEI system was developed by the G20 in 2011,[5] in response to this inability of financial institutions to identify organisations uniquely, so that their financial transactions in different national jurisdictions could be fully tracked.[6] Currently, the ROC (Regulatory Oversight Committee), a coalition of financial regulators and central banks from around the world, is encouraging the expansion of the LEI. The U.S. and European countries currently require corporations to use the legal entity identifier when reporting the details of over-the-counter derivatives transactions to financial authorities. The first LEIs were issued in December 2012.[7]

Code structure

Structure of LEI codes

  Characters 1–4:   LOU code (identifies the issuing Local Operating Unit)
  Characters 5–6:   Reserved (“00”)
  Characters 7–18:  Entity identification
  Characters 19–20: Checksum

  Examples:
  G.E. Financing GmbH               5493 00 84UKLVMY22DS 16
  Jaguar Land Rover Ltd             2138 00 WSGIIZCXF1P5 72
  British Broadcasting Corporation  5493 00 0IBP32UQZ0KL 24

The technical specification for LEI is ISO 17442.[6] An LEI consists of a 20-character alphanumeric string, with the first 4 characters identifying the Local Operating Unit (LOU) that issued the LEI. Characters 5 and 6 are reserved as ’00’. Characters 7-18 are the unique alphanumeric string assigned to the organisation by the LOU. The final 2 characters are checksum digits.[8]
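The check digits can be verified with the ISO 7064 MOD 97-10 scheme referenced by ISO 17442 (the same check used for IBANs): map letters to the numbers 10–35, read the resulting digit string as an integer, and require it to be congruent to 1 modulo 97. A minimal sketch in Python:

```python
def lei_is_valid(lei: str) -> bool:
    """Validate an LEI's check digits using ISO 7064 MOD 97-10,
    the scheme referenced by ISO 17442 (also used for IBANs)."""
    lei = lei.strip().upper()
    if len(lei) != 20 or not lei.isalnum():
        return False
    # Map '0'-'9' to 0-9 and 'A'-'Z' to 10-35, then concatenate the digits.
    expanded = "".join(str(int(ch, 36)) for ch in lei)
    return int(expanded) % 97 == 1

# Example from the table above: G.E. Financing GmbH
print(lei_is_valid("54930084UKLVMY22DS16"))  # True
```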

Global Operating System[9]

 
The governance hierarchy runs: G-20 → FSB (Financial Stability Board) → LEI ROC → GLEIF (Board of Directors) → LOU 1, LOU 2, LOU 3 …
  • G-20: An international forum of 20 major economies (about 90 percent of the world’s GDP), comprising the G7 developed countries, the European Union, and twelve emerging nations.
  • FSB (Financial Stability Board): An organization founded to enhance the stability of the global financial system and to oversee international finance.
  • LEI ROC (Regulatory Oversight Committee): The top decision-making body for the LEI system, under the FSB. Financial authorities and central banks from some 40 countries, along with international organizations such as the IMF, are represented as members.
  • GLEIF (Global LEI Foundation): The practical operating organization within the LEI system, responsible for overseeing the LOUs in each region.
  • LOU (Local Operating Unit): The operating organizations that issue and maintain LEI codes in each region; 31 LOUs are currently active worldwide.

Need for an LEI[10]

  • Who should have an LEI?

Both corporations and funds involved in financial transactions need an LEI. The LEI is an identification code designed to recognize all entities and funds involved in financial transactions worldwide. Therefore, all corporations and funds that participate in financial transactions should be issued an LEI.

  • When is it needed?

In the United States and Europe, the parties to a transaction must use the LEI when reporting over-the-counter derivatives transactions to the supervisory authorities. Recently, coverage has expanded to areas such as alternative investments, fund investments, insurance, spot markets, pensions, logistics, and bids.

Obtaining a Legal Entity Identifier (LEI)

The Global Legal Entity Identifier Foundation (GLEIF) does not issue Legal Entity Identifiers directly; instead, it delegates this responsibility to Local Operating Units (LOUs). These LEI issuers supply different services and can charge different prices for the registration services they offer. GLEIF is responsible for monitoring LEI data quality.[11]

Advantages of LEI

  • In financial transactions, the LEI reduces counterparty risk: you can measure a counterparty’s total risk and detect concentration of exposure to a particular trading partner.
  • It reduces the cost of reporting tasks. Financial transactions incur lower information-gathering and administrative costs with respect to the counterparty, and the cost of various reporting tasks, such as reporting the details of over-the-counter derivatives transactions and filing recovery and resolution plans, is also reduced.
  • It enhances market transparency. Because the LEI is an internationally shared code, sharing information on international financial transactions makes it easier to detect market manipulation, financial fraud, and other disruptive acts.

Source: Legal Entity Identifier, https://en.wikipedia.org/w/index.php?title=Legal_Entity_Identifier&oldid=853729753 (last visited Nov. 23, 2018).

Algorithmic trading

Algorithmic trading is a method of executing a large order (too large to fill all at once) using automated pre-programmed trading instructions accounting for variables such as time, price, and volume[1] to send small slices of the order (child orders) out to the market over time. They were developed so that traders do not need to constantly watch a stock and repeatedly send those slices out manually. Popular “algos” include Percentage of Volume, Pegged, VWAP, TWAP, Implementation Shortfall, Target Close. In the twenty-first century, algorithmic trading has been gaining traction with both retail and institutional traders. Algorithmic trading is not an attempt to make a trading profit. It is simply a way to minimize the cost, market impact and risk in execution of an order.[2][3] It is widely used by investment banks, pension funds, mutual funds, and hedge funds because these institutional traders need to execute large orders in markets that cannot support all of the size at once.
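To illustrate the execution-oriented character of these algorithms, here is a minimal, hypothetical sketch of a TWAP-style slicer; real implementations add randomization, limit prices, and volume tracking, and `send_child_order` is a placeholder for a broker or exchange gateway:

```python
import time

def twap_slice(symbol, total_qty, duration_s, n_slices, send_child_order):
    """Split a parent order into equal child orders sent at a fixed
    interval -- the essence of a TWAP (time-weighted average price)
    execution algorithm."""
    child_qty, remainder = divmod(total_qty, n_slices)
    interval = duration_s / n_slices
    for i in range(n_slices):
        qty = child_qty + (1 if i < remainder else 0)  # spread the remainder
        send_child_order(symbol, qty)
        time.sleep(interval)

# Hypothetical usage: work 100,000 shares over one hour in 20 child orders.
# twap_slice("XYZ", 100_000, 3600, 20, send_child_order=my_broker_gateway)
```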

The term is also used to mean automated trading system. These do indeed have the goal of making a profit. Also known as black box trading, these encompass trading strategies that are heavily reliant on complex mathematical formulas and high-speed computer programs.[4][5]

Such systems run strategies including market making, inter-market spreading, arbitrage, or pure speculation such as trend following. Many fall into the category of high-frequency trading (HFT), which is characterized by high turnover and high order-to-trade ratios.[6] As a result, in February 2012, the Commodity Futures Trading Commission (CFTC) formed a special working group that included academics and industry experts to advise the CFTC on how best to define HFT.[7][8] HFT strategies utilize computers that make elaborate decisions to initiate orders based on information that is received electronically, before human traders are capable of processing the information they observe. Algorithmic trading and HFT have resulted in a dramatic change of the market microstructure, particularly in the way liquidity is provided.[9]

Emblematic examples

Profitability projections by the TABB Group, a financial services industry research firm, for the US equities HFT industry were US$1.3 billion before expenses for 2014,[10] significantly down from the maximum of US$21 billion that the 300 securities firms and hedge funds then specializing in this type of trading took in as profits in 2008,[11] which the authors had then called “relatively small” and “surprisingly modest” when compared to the market’s overall trading volume. In March 2014, Virtu Financial, a high-frequency trading firm, reported that over five years the firm as a whole was profitable on 1,277 out of 1,278 trading days,[12] losing money on just one day, empirically demonstrating the law-of-large-numbers benefit of trading thousands to millions of tiny, low-risk and low-edge trades every trading day.[13]

[Chart: Algorithmic trading as a percentage of market volume.[14]]

A third of all European Union and United States stock trades in 2006 were driven by automatic programs, or algorithms.[15] As of 2009, studies suggested HFT firms accounted for 60–73% of all US equity trading volume, with that number falling to approximately 50% in 2012.[16][17] In 2006, at the London Stock Exchange, over 40% of all orders were entered by algorithmic traders, with 60% predicted for 2007. American markets and European markets generally have a higher proportion of algorithmic trades than other markets, and estimates for 2008 range as high as an 80% proportion in some markets. Foreign exchange markets also have active algorithmic trading (about 25% of orders in 2006).[18] Futures markets are considered fairly easy to integrate into algorithmic trading,[19] with about 20% of options volume expected to be computer-generated by 2010.[needs update][20] Bond markets are moving toward more access to algorithmic traders.[21]

Algorithmic trading and HFT have been the subject of much public debate since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission said in reports that an algorithmic trade entered by a mutual fund company triggered a wave of selling that led to the 2010 Flash Crash.[22][23][24][25][26][27][28][29] The same reports found HFT strategies may have contributed to subsequent volatility by rapidly pulling liquidity from the market. As a result of these events, the Dow Jones Industrial Average suffered its second largest intraday point swing ever to that date, though prices quickly recovered. (See List of largest daily changes in the Dow Jones Industrial Average.) A July 2011 report by the International Organization of Securities Commissions (IOSCO), an international body of securities regulators, concluded that while “algorithms and HFT technology have been used by market participants to manage their trading and risk, their usage was also clearly a contributing factor in the flash crash event of May 6, 2010.”[30][31] However, other researchers have reached a different conclusion. One 2010 study found that HFT did not significantly alter trading inventory during the Flash Crash.[32] Some algorithmic trading ahead of index fund rebalancing transfers profits from investors.[33][34][35]

History

Computerization of the order flow in financial markets began in the early 1970s, with some landmarks being the introduction of the New York Stock Exchange‘s “designated order turnaround” system (DOT, and later SuperDOT), which routed orders electronically to the proper trading post, where they were executed manually, and the “opening automated reporting system” (OARS), which aided the specialist in determining the market clearing opening price.

Program trading is defined by the New York Stock Exchange as an order to buy or sell 15 or more stocks valued at over US$1 million total. In practice this means that all program trades are entered with the aid of a computer. In the 1980s, program trading became widely used in trading between the S&P 500 equity and futures markets.

In stock index arbitrage a trader buys (or sells) a stock index futures contract such as the S&P 500 futures and sells (or buys) a portfolio of up to 500 stocks (can be a much smaller representative subset) at the NYSE matched against the futures trade. The program trade at the NYSE would be pre-programmed into a computer to enter the order automatically into the NYSE’s electronic order routing system at a time when the futures price and the stock index were far enough apart to make a profit.
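A sketch of the arbitrage condition behind such a program trade, using the standard cost-of-carry fair value for an index future (the rate, yield, and cost thresholds below are illustrative assumptions, not market data):

```python
from math import exp

def futures_fair_value(spot, r, q, t_years):
    """Cost-of-carry fair value of an index future: F = S * exp((r - q) * T),
    with r the risk-free rate and q the index dividend yield."""
    return spot * exp((r - q) * t_years)

def index_arb_signal(spot, futures_price, r, q, t_years, cost):
    """Trade only when the futures/cash basis exceeds round-trip costs."""
    basis = futures_price - futures_fair_value(spot, r, q, t_years)
    if basis > cost:
        return "sell futures, buy stocks"   # futures rich versus cash
    if basis < -cost:
        return "buy futures, sell stocks"   # futures cheap versus cash
    return "no trade"

# Hypothetical numbers: index at 2000, futures at 2012, 3 months to expiry.
print(index_arb_signal(2000, 2012, r=0.02, q=0.015, t_years=0.25, cost=5.0))
```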

At about the same time portfolio insurance was designed to create a synthetic put option on a stock portfolio by dynamically trading stock index futures according to a computer model based on the Black–Scholes option pricing model.

Both strategies, often simply lumped together as “program trading”, were blamed by many people (for example by the Brady report) for exacerbating or even starting the 1987 stock market crash. Yet the impact of computer driven trading on stock market crashes is unclear and widely discussed in the academic community.[36]

Financial markets with fully electronic execution and similar electronic communication networks developed in the late 1980s and 1990s. In the U.S., decimalization, which changed the minimum tick size from 1/16 of a dollar (US$0.0625) to US$0.01 per share in 2001,[37] may have encouraged algorithmic trading as it changed the market microstructure by permitting smaller differences between the bid and offer prices, decreasing the market-makers’ trading advantage, thus increasing market liquidity.

This increased market liquidity led to institutional traders splitting up orders according to computer algorithms so they could execute orders at a better average price. These average price benchmarks are measured and calculated by computers by applying the time-weighted average price or more usually by the volume-weighted average price.

It is over. The trading that existed down the centuries has died. We have an electronic market today. It is the present. It is the future.

Robert Greifeld, NASDAQ CEO, April 2011[38]

A further encouragement for the adoption of algorithmic trading in the financial markets came in 2001 when a team of IBM researchers published a paper[39] at the International Joint Conference on Artificial Intelligence where they showed that in experimental laboratory versions of the electronic auctions used in the financial markets, two algorithmic strategies (IBM’s own MGD, and Hewlett-Packard‘s ZIP) could consistently out-perform human traders. MGD was a modified version of the “GD” algorithm invented by Steven Gjerstad & John Dickhaut in 1996/7;[40] the ZIP algorithm had been invented at HP by Dave Cliff (professor) in 1996.[41] In their paper, the IBM team wrote that the financial impact of their results showing MGD and ZIP outperforming human traders “…might be measured in billions of dollars annually”; the IBM paper generated international media coverage.

As more electronic markets opened, other algorithmic trading strategies were introduced. These strategies are more easily implemented by computers, because machines can react more rapidly to temporary mispricing and examine prices from several markets simultaneously. Examples include Chameleon (developed by BNP Paribas), Stealth[42] (developed by Deutsche Bank), and Sniper and Guerilla (developed by Credit Suisse[43]), as well as arbitrage, statistical arbitrage, trend following, and mean reversion.

This type of trading is what is driving the new demand for low latency proximity hosting and global exchange connectivity. It is imperative to understand latency when putting together a strategy for electronic trading. Latency refers to the delay between the transmission of information from a source and its reception at a destination. Latency is bounded below by the speed of light, which corresponds to about 3.3 milliseconds per 1,000 kilometers in vacuum; light in optical fiber travels roughly 30% slower, and any signal-regenerating or routing equipment introduces further delay on top of this baseline.
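A back-of-the-envelope check of these propagation bounds (the New York–Chicago distance used below is an assumed round figure):

```python
SPEED_OF_LIGHT_KM_S = 299_792                    # vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S / 1.47    # typical fiber refractive index ~1.47

def one_way_latency_ms(distance_km, speed_km_s):
    """Propagation delay only -- ignores routing, serialization, queuing."""
    return distance_km / speed_km_s * 1000

# New York <-> Chicago is roughly 1,150 km in a straight line (assumed figure).
print(round(one_way_latency_ms(1150, SPEED_OF_LIGHT_KM_S), 2))  # ~3.84 ms vacuum bound
print(round(one_way_latency_ms(1150, FIBER_SPEED_KM_S), 2))     # ~5.64 ms in fiber
```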

Strategies

Trading ahead of index fund rebalancing

Most retirement savings, such as private pension funds or 401(k) and individual retirement accounts in the US, are invested in mutual funds, the most popular of which are index funds which must periodically “rebalance” or adjust their portfolio to match the new prices and market capitalization of the underlying securities in the stock or other index that they track.[44][45] Profits are transferred from passive index investors to active investors, some of whom are algorithmic traders specifically exploiting the index rebalance effect. The magnitude of these losses incurred by passive investors has been estimated at 21-28bp per year for the S&P 500 and 38-77bp per year for the Russell 2000.[34] John Montgomery of Bridgeway Capital Management says that the resulting “poor investor returns” from trading ahead of mutual funds is “the elephant in the room” that “shockingly, people are not talking about.”[35]

Pairs trading

Pairs trading or pair trading is a long-short, ideally market-neutral strategy enabling traders to profit from transient discrepancies in relative value of close substitutes. Unlike in the case of classic arbitrage, in case of pairs trading, the law of one price cannot guarantee convergence of prices. This is especially true when the strategy is applied to individual stocks – these imperfect substitutes can in fact diverge indefinitely. In theory the long-short nature of the strategy should make it work regardless of the stock market direction. In practice, execution risk, persistent and large divergences, as well as a decline in volatility can make this strategy unprofitable for long periods of time (e.g. 2004-7). It belongs to wider categories of statistical arbitrage, convergence trading, and relative value strategies.[46]
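A minimal sketch of a pairs-trading signal under common simplifying assumptions: a log-price spread, an OLS hedge ratio, and fixed z-score thresholds (the lookback and thresholds are illustrative, not canonical):

```python
import numpy as np

def pairs_signal(prices_a, prices_b, lookback=60, entry_z=2.0, exit_z=0.5):
    """Z-score of a hedged log-price spread. Enter when the spread is
    stretched, exit when it reverts -- a sketch, not production logic."""
    log_a = np.log(prices_a[-lookback:])
    log_b = np.log(prices_b[-lookback:])
    beta = np.polyfit(log_b, log_a, 1)[0]          # OLS hedge ratio
    spread = log_a - beta * log_b
    z = (spread[-1] - spread.mean()) / spread.std()
    if z > entry_z:
        return "short A / long B"    # spread rich: A expensive relative to B
    if z < -entry_z:
        return "long A / short B"    # spread cheap
    if abs(z) < exit_z:
        return "flat"                # spread reverted: close positions
    return "hold"
```

Note how the exit threshold is deliberately inside the entry threshold: without that gap the strategy would flip in and out of positions on noise.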

Delta-neutral strategies

In finance, delta-neutral describes a portfolio of related financial securities, in which the portfolio value remains unchanged due to small changes in the value of the underlying security. Such a portfolio typically contains options and their corresponding underlying securities such that positive and negative delta components offset, resulting in the portfolio’s value being relatively insensitive to changes in the value of the underlying security.
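A minimal sketch of neutralizing such a portfolio, assuming Black–Scholes deltas and a hypothetical long call position; real desks recompute and rebalance as the delta drifts with the underlying:

```python
from math import log, sqrt
from statistics import NormalDist

def bs_call_delta(spot, strike, r, sigma, t):
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    return NormalDist().cdf(d1)

# Hypothetical position: long 10 call contracts of 100 shares each.
delta = bs_call_delta(spot=100, strike=105, r=0.02, sigma=0.25, t=0.5)
shares_to_short = round(10 * 100 * delta)
print(f"call delta ~ {delta:.3f}; short {shares_to_short} shares to be delta-neutral")
```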

Arbitrage

In economics and finance, arbitrage /ˈɑːrbɪtrɑːʒ/ is the practice of taking advantage of a price difference between two or more markets: striking a combination of matching deals that capitalize upon the imbalance, the profit being the difference between the market prices. When used by academics, an arbitrage is a transaction that involves no negative cash flow at any probabilistic or temporal state and a positive cash flow in at least one state; in simple terms, it is the possibility of a risk-free profit at zero cost. Example: One of the most popular Arbitrage trading opportunities is played with the S&P futures and the S&P 500 stocks. During most trading days these two will develop disparity in the pricing between the two of them. This happens when the price of the stocks which are mostly traded on the NYSE and NASDAQ markets either get ahead or behind the S&P Futures which are traded in the CME market.

Conditions for arbitrage

Arbitrage is possible when one of three conditions is met:

  • The same asset does not trade at the same price on all markets (the “law of one price” is temporarily violated).
  • Two assets with identical cash flows do not trade at the same price.
  • An asset with a known price in the future does not today trade at its future price discounted at the risk-free interest rate (or, the asset does not have negligible costs of storage; as such, for example, this condition holds for grain but not for securities); see the formula below.
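The third condition can be written compactly as a discounting identity:

```latex
% No-arbitrage (cost-of-carry) condition for an asset with a known
% future price F_T and risk-free rate r: today's price S_0 must equal
% the discounted future price, otherwise condition 3 above is violated.
\[
  S_0 = \frac{F_T}{(1+r)^{T}}
  \qquad\text{equivalently}\qquad
  F_T = S_0\,(1+r)^{T}.
\]
```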

Arbitrage is not simply the act of buying a product in one market and selling it in another for a higher price at some later time. The long and short transactions should ideally occur simultaneously to minimize the exposure to market risk, or the risk that prices may change on one market before both transactions are complete. In practical terms, this is generally only possible with securities and financial products which can be traded electronically, and even then, when the first leg(s) of the trade is executed, the prices in the other legs may have worsened, locking in a guaranteed loss. Missing one of the legs of the trade (and subsequently having to open it at a worse price) is called ‘execution risk’ or more specifically ‘leg-in and leg-out risk’.[a]

In the simplest example, any good sold in one market should sell for the same price in another. Traders may, for example, find that the price of wheat is lower in agricultural regions than in cities, purchase the good, and transport it to another region to sell at a higher price. This type of price arbitrage is the most common, but this simple example ignores the cost of transport, storage, risk, and other factors. “True” arbitrage requires that there be no market risk involved. Where securities are traded on more than one exchange, arbitrage occurs by simultaneously buying in one and selling on the other. Such simultaneous execution, if perfect substitutes are involved, minimizes capital requirements, but in practice never creates a “self-financing” (free) position, as many sources incorrectly assume following the theory. As long as there is some difference in the market value and riskiness of the two legs, capital would have to be put up in order to carry the long-short arbitrage position.

Mean reversion

Mean reversion is a mathematical methodology sometimes used for stock investing, but it can be applied to other processes. In general terms the idea is that both a stock’s high and low prices are temporary, and that a stock’s price tends toward an average price over time. An example of a mean-reverting process is the Ornstein–Uhlenbeck process, shown below.
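Written as a stochastic differential equation, the Ornstein–Uhlenbeck process is:

```latex
% Ornstein--Uhlenbeck process: X_t reverts toward the long-run mean \mu
% at speed \theta, with volatility \sigma driving the Brownian noise W_t.
\[
  dX_t = \theta\,(\mu - X_t)\,dt + \sigma\,dW_t,
  \qquad \theta > 0.
\]
```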

Mean reversion involves first identifying the trading range for a stock, and then computing the average price using analytical techniques as it relates to assets, earnings, etc.

When the current market price is less than the average price, the stock is considered attractive for purchase, with the expectation that the price will rise. When the current market price is above the average price, the market price is expected to fall. In other words, deviations from the average price are expected to revert to the average.

The standard deviation of the most recent prices (e.g., the last 20) is often used as a buy or sell indicator.
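A minimal sketch of such an indicator, with an assumed 20-price window and a two-standard-deviation band:

```python
import statistics

def mean_reversion_signal(prices, window=20, band=2.0):
    """Bollinger-style indicator: compare the latest price to the rolling
    mean +/- `band` standard deviations of the last `window` prices, as
    described above. Window and band width are assumed, not canonical."""
    recent = prices[-window:]
    mu = statistics.fmean(recent)
    sigma = statistics.stdev(recent)
    last = prices[-1]
    if last < mu - band * sigma:
        return "buy"    # price far below average: expect reversion up
    if last > mu + band * sigma:
        return "sell"   # price far above average: expect reversion down
    return "hold"
```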

Stock reporting services (such as Yahoo! Finance, MS Investor, Morningstar, etc.), commonly offer moving averages for periods such as 50 and 100 days. While reporting services provide the averages, identifying the high and low prices for the study period is still necessary.

Scalping

Scalping is liquidity provision by non-traditional market makers, whereby traders attempt to earn (or make) the bid-ask spread. This procedure allows for profit for so long as price moves are less than this spread and normally involves establishing and liquidating a position quickly, usually within minutes or less.

A market maker is basically a specialized scalper. A market maker trades many times the volume of an average individual scalper and makes use of more sophisticated trading systems and technology. However, registered market makers are bound by exchange rules stipulating their minimum quote obligations. For instance, NASDAQ requires each market maker to post at least one bid and one ask at some price level, so as to maintain a two-sided market for each stock represented.

Transaction cost reduction

Most strategies referred to as algorithmic trading (as well as algorithmic liquidity-seeking) fall into the cost-reduction category. The basic idea is to break down a large order into small orders and place them in the market over time. The choice of algorithm depends on various factors, with the most important being volatility and liquidity of the stock. For example, for a highly liquid stock, matching a certain percentage of the overall orders of stock (called volume inline algorithms) is usually a good strategy, but for a highly illiquid stock, algorithms try to match every order that has a favorable price (called liquidity-seeking algorithms).

The success of these strategies is usually measured by comparing the average price at which the entire order was executed with the average price achieved through a benchmark execution for the same duration. Usually, the volume-weighted average price is used as the benchmark. At times, the execution price is also compared with the price of the instrument at the time of placing the order.
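A minimal sketch of this benchmark comparison, computing an interval VWAP from (price, volume) prints and the fill's shortfall against it in basis points (the sample prints are assumed):

```python
def vwap(trades):
    """Volume-weighted average price of a list of (price, volume) pairs."""
    notional = sum(p * v for p, v in trades)
    volume = sum(v for _, v in trades)
    return notional / volume

def slippage_bps(avg_fill_price, benchmark, side="buy"):
    """Shortfall versus a benchmark such as interval VWAP, in basis
    points; positive means the fill underperformed the benchmark."""
    sign = 1 if side == "buy" else -1
    return sign * (avg_fill_price - benchmark) / benchmark * 10_000

market_trades = [(100.00, 500), (100.10, 300), (99.95, 200)]  # assumed prints
print(round(slippage_bps(100.05, vwap(market_trades)), 2))    # ~3.0 bps shortfall
```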

A special class of these algorithms attempts to detect algorithmic or iceberg orders on the other side (i.e. if you are trying to buy, the algorithm will try to detect orders for the sell side). These algorithms are called sniffing algorithms. A typical example is “Stealth.”

Some examples of algorithms are TWAP, VWAP, Implementation shortfall, POV, Display size, Liquidity seeker, and Stealth. Modern algorithms are often optimally constructed via either static or dynamic programming.[47][48][49]

Strategies that only pertain to dark pools

Recently, HFT, which comprises a broad set of buy-side as well as market making sell side traders, has become more prominent and controversial.[50] These algorithms or techniques are commonly given names such as “Stealth” (developed by the Deutsche Bank), “Iceberg”, “Dagger”, “Guerrilla”, “Sniper”, “BASOR” (developed by Quod Financial) and “Sniffer”.[51] Dark pools are alternative trading systems that are private in nature—and thus do not interact with public order flow—and seek instead to provide undisplayed liquidity to large blocks of securities.[52] In dark pools trading takes place anonymously, with most orders hidden or “iceberged.”[53] Gamers or “sharks” sniff out large orders by “pinging” small market orders to buy and sell. When several small orders are filled the sharks may have discovered the presence of a large iceberged order.

“Now it’s an arms race,” said Andrew Lo, director of the Massachusetts Institute of Technology’s Laboratory for Financial Engineering. “Everyone is building more sophisticated algorithms, and the more competition exists, the smaller the profits.”[54]

Market timing

Strategies designed to generate alpha are considered market timing strategies. These types of strategies are designed using a methodology that includes backtesting, forward testing and live testing. Market timing algorithms will typically use technical indicators such as moving averages but can also include pattern recognition logic implemented using Finite State Machines.

Backtesting the algorithm is typically the first stage and involves simulating the hypothetical trades through an in-sample data period. Optimization is performed in order to determine the best inputs. Steps taken to reduce the chance of over-optimization can include modifying the inputs +/- 10%, sweeping (“shmooing”) the inputs in large steps, running Monte Carlo simulations, and ensuring slippage and commissions are accounted for.[55]
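A sketch of the input-perturbation step, assuming a user-supplied `backtest` function that maps a parameter dictionary to a performance metric where larger is better:

```python
def perturbation_check(backtest, base_params, tolerance=0.5):
    """Re-run a backtest with each numeric input shifted +/-10% and flag
    parameters whose perturbation collapses performance -- a symptom of
    over-optimization (a knife-edge optimum). `backtest` and `tolerance`
    are assumptions supplied by the user."""
    base_score = backtest(base_params)
    fragile = []
    for name, value in base_params.items():
        for factor in (0.9, 1.1):
            tweaked = dict(base_params, **{name: value * factor})
            if backtest(tweaked) < tolerance * base_score:
                fragile.append((name, factor))
    return fragile  # an empty list suggests the optimum is not a knife-edge
```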

Forward testing the algorithm is the next stage and involves running the algorithm through an out of sample data set to ensure the algorithm performs within backtested expectations.

Live testing is the final stage of development and requires the developer to compare actual live trades with both the backtested and forward tested models. Metrics compared include percent profitable, profit factor, maximum drawdown and average gain per trade.

High-frequency trading

As noted above, high-frequency trading (HFT) is a form of algorithmic trading characterized by high turnover and high order-to-trade ratios. Although there is no single definition of HFT, among its key attributes are highly sophisticated algorithms, specialized order types, co-location, very short-term investment horizons, and high cancellation rates for orders.[6] In the U.S., high-frequency trading (HFT) firms represent 2% of the approximately 20,000 firms operating today, but account for 73% of all equity trading volume.[citation needed] As of the first quarter of 2009, total assets under management for hedge funds with HFT strategies were US$141 billion, down about 21% from their high.[56] The HFT strategy was first made successful by Renaissance Technologies.[57] High-frequency funds started to become especially popular in 2007 and 2008.[57] Many HFT firms are market makers and provide liquidity to the market, which has lowered volatility and helped narrow bid-offer spreads, making trading and investing cheaper for other market participants.[56][58][59] HFT has been a subject of intense public focus since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission stated that both algorithmic trading and HFT contributed to volatility in the 2010 Flash Crash. Among the major U.S. high frequency trading firms are Chicago Trading, Virtu Financial, Timber Hill, ATD, GETCO, and Citadel LLC.[60]

There are four key categories of HFT strategies: market-making based on order flow, market-making based on tick data information, event arbitrage and statistical arbitrage. All portfolio-allocation decisions are made by computerized quantitative models. The success of computerized strategies is largely driven by their ability to simultaneously process volumes of information, something ordinary human traders cannot do.

Market making

Market making involves placing a limit order to sell (or offer) above the current market price or a buy limit order (or bid) below the current price on a regular and continuous basis to capture the bid-ask spread. Automated Trading Desk, which was bought by Citigroup in July 2007, has been an active market maker, accounting for about 6% of total volume on both NASDAQ and the New York Stock Exchange.[61]
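A minimal sketch of the quoting logic described above, placing symmetric bid and ask quotes around the mid price; inventory management and adverse-selection controls, which real market makers need, are omitted:

```python
def make_quotes(mid, half_spread, tick=0.01):
    """Two-sided quotes around the mid price: bid below, ask above,
    rounded to the tick. Capturing the spread requires both sides to
    fill before the price moves through the quotes."""
    bid = round((mid - half_spread) / tick) * tick
    ask = round((mid + half_spread) / tick) * tick
    return round(bid, 2), round(ask, 2)

print(make_quotes(mid=50.00, half_spread=0.02))  # (49.98, 50.02)
```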

Statistical arbitrage

Another set of HFT strategies, in the classical arbitrage vein, might involve several securities, such as covered interest rate parity in the foreign exchange market, which gives a relation between the prices of a domestic bond, a bond denominated in a foreign currency, the spot price of the currency, and the price of a forward contract on the currency. If market prices are sufficiently different from those implied by the model to cover transaction costs, then four transactions can be made to guarantee a risk-free profit. HFT allows similar arbitrages using models of greater complexity involving many more than four securities. The TABB Group estimates that annual aggregate profits of low latency arbitrage strategies currently exceed US$21 billion.[16]
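The covered interest rate parity relation referred to above can be stated as:

```latex
% Covered interest rate parity: the forward FX rate F implied by the
% spot rate S and the domestic/foreign interest rates r_d, r_f over
% horizon T. A tradable arbitrage exists only when the market forward
% deviates from this value by more than transaction costs.
\[
  F = S\,\frac{(1 + r_d)^{T}}{(1 + r_f)^{T}}
\]
```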

A wide range of statistical arbitrage strategies have been developed whereby trading decisions are made on the basis of deviations from statistically significant relationships. Like market-making strategies, statistical arbitrage can be applied in all asset classes.

Event arbitrage

A subset of risk, merger, convertible, or distressed securities arbitrage that counts on a specific event, such as a contract signing, regulatory approval, judicial decision, etc., to change the price or rate relationship of two or more financial instruments and permit the arbitrageur to earn a profit.[62]

Merger arbitrage, also called risk arbitrage, would be an example of this. Merger arbitrage generally consists of buying the stock of a company that is the target of a takeover while shorting the stock of the acquiring company. Usually the market price of the target company is less than the price offered by the acquiring company. The spread between these two prices depends mainly on the probability and the timing of the takeover being completed, as well as the prevailing level of interest rates. The bet in a merger arbitrage is that such a spread will eventually be zero, if and when the takeover is completed. The risk is that the deal “breaks” and the spread massively widens.

Spoofing

One strategy that some traders have employed, which has been proscribed yet likely continues, is called spoofing. It is the act of placing orders to give the impression of wanting to buy or sell shares, without ever intending to let the orders execute, in order to temporarily manipulate the market and buy or sell shares at a more favorable price. This is done by creating limit orders outside the current bid or ask price to change the reported price to other market participants. The trader can subsequently place trades based on the artificial change in price, then cancel the limit orders before they are executed.

Suppose a trader desires to sell shares of a company with a current bid of $20 and a current ask of $20.20. The trader would place a buy order at $20.10, still some distance from the ask so it will not be executed, and the $20.10 bid is reported as the National Best Bid and Offer best bid price. The trader then executes a market order for the sale of the shares they wished to sell. Because the best bid price is the trader’s own artificial bid, a market maker fills the sale order at $20.10, allowing for a $0.10 higher sale price per share. The trader subsequently cancels the limit order on the purchase they never intended to complete.

Quote stuffing

Quote stuffing is a tactic employed by malicious traders that involves quickly entering and withdrawing large quantities of orders in an attempt to flood the market, thereby gaining an advantage over slower market participants.[63] The rapidly placed and canceled orders cause market data feeds that ordinary investors rely on to delay price quotes while the stuffing is occurring. HFT firms benefit from proprietary, higher-capacity feeds and the most capable, lowest latency infrastructure. Researchers showed high-frequency traders are able to profit by the artificially induced latencies and arbitrage opportunities that result from quote stuffing.[64]

Low latency trading systems

Network-induced latency, a synonym for delay, measured in one-way delay or round-trip time, is normally defined as how much time it takes for a data packet to travel from one point to another.[65] Low latency trading refers to the algorithmic trading systems and network routes used by financial institutions connecting to stock exchanges and electronic communication networks (ECNs) to rapidly execute financial transactions.[66] Most HFT firms depend on low latency execution of their trading strategies. Joel Hasbrouck and Gideon Saar (2013) measure latency based on three components: the time it takes for 1) information to reach the trader, 2) the trader’s algorithms to analyze the information, and 3) the generated action to reach the exchange and get implemented.[67] In a contemporary electronic market (circa 2009), low latency trade processing time was qualified as under 10 milliseconds, and ultra-low latency as under 1 millisecond.[68]

Low-latency traders depend on ultra-low latency networks. They profit by providing information, such as competing bids and offers, to their algorithms microseconds faster than their competitors.[16] The revolutionary advance in speed has led to the need for firms to have a real-time, colocated trading platform to benefit from implementing high-frequency strategies.[16] Strategies are constantly altered to reflect the subtle changes in the market as well as to combat the threat of the strategy being reverse engineered by competitors. This is due to the evolutionary nature of algorithmic trading strategies – they must be able to adapt and trade intelligently, regardless of market conditions, which involves being flexible enough to withstand a vast array of market scenarios. As a result, a significant proportion of net revenue from firms is spent on the R&D of these autonomous trading systems.[16]

Strategy implementation

Most of the algorithmic strategies are implemented using modern programming languages, although some still implement strategies designed in spreadsheets. Increasingly, the algorithms used by large brokerages and asset managers are written to the FIX Protocol’s Algorithmic Trading Definition Language (FIXatdl), which allows firms receiving orders to specify exactly how their electronic orders should be expressed. Orders built using FIXatdl can then be transmitted from traders’ systems via the FIX Protocol.[69] Basic models can rely on as little as a linear regression, while more complex game-theoretic and pattern recognition[70] or predictive models can also be used to initiate trading. More complex methods such as Markov Chain Monte Carlo have been used to create these models.[citation needed]

Issues and developments

Algorithmic trading has been shown to substantially improve market liquidity[71] among other benefits. However, improvements in productivity brought by algorithmic trading have been opposed by human brokers and traders facing stiff competition from computers.

Cyborg finance

Technological advances in finance, particularly those relating to algorithmic trading, have increased financial speed, connectivity, reach, and complexity while simultaneously reducing its humanity. Computers running software based on complex algorithms have replaced humans in many functions in the financial industry. Finance is essentially becoming an industry where machines and humans share the dominant roles – transforming modern finance into what one scholar has called, “cyborg finance.”[72]

Concerns

While many experts laud the benefits of innovation in computerized algorithmic trading, other analysts have expressed concern with specific aspects of computerized trading.

“The downside with these systems is their black box-ness,” Mr. Williams said. “Traders have intuitive senses of how the world works. But with these systems you pour in a bunch of numbers, and something comes out the other end, and it’s not always intuitive or clear why the black box latched onto certain data or relationships.” [54]

“The Financial Services Authority has been keeping a watchful eye on the development of black box trading. In its annual report the regulator remarked on the great benefits of efficiency that new technology is bringing to the market. But it also pointed out that ‘greater reliance on sophisticated technology and modelling brings with it a greater risk that systems failure can result in business interruption’.” [73]

UK Treasury minister Lord Myners has warned that companies could become the “playthings” of speculators because of automatic high-frequency trading. Lord Myners said the process risked destroying the relationship between an investor and a company.[74]

Other issues include the technical problem of latency or the delay in getting quotes to traders,[75] security and the possibility of a complete system breakdown leading to a market crash.[76]

“Goldman spends tens of millions of dollars on this stuff. They have more people working in their technology area than people on the trading desk…The nature of the markets has changed dramatically.” [77]

On August 1, 2012 Knight Capital Group experienced a technology issue in their automated trading system,[78] causing a loss of $440 million.

This issue was related to Knight’s installation of trading software and resulted in Knight sending numerous erroneous orders in NYSE-listed securities into the market. This software has been removed from the company’s systems. [..] Clients were not negatively affected by the erroneous orders, and the software issue was limited to the routing of certain listed stocks to NYSE. Knight has traded out of its entire erroneous trade position, which has resulted in a realized pre-tax loss of approximately $440 million.

Algorithmic and high-frequency trading were shown to have contributed to volatility during the May 6, 2010 Flash Crash,[22][24] when the Dow Jones Industrial Average plunged about 600 points only to recover those losses within minutes. At the time, it was the second largest point swing, 1,010.14 points, and the biggest one-day point decline, 998.5 points, on an intraday basis in Dow Jones Industrial Average history.[79]

Recent developments

Financial market news is now being formatted by firms such as Need To Know News, Thomson Reuters, Dow Jones, and Bloomberg, to be read and traded on via algorithms.

“Computers are now being used to generate news stories about company earnings results or economic statistics as they are released. And this almost instantaneous information forms a direct feed into other computers which trade on the news.”[80]

The algorithms do not simply trade on simple news stories but also interpret more difficult-to-understand news. Some firms are also attempting to automatically assign sentiment (deciding if the news is good or bad) to news stories so that automated trading can work directly on the news story.[81]

“Increasingly, people are looking at all forms of news and building their own indicators around it in a semi-structured way,” as they constantly seek out new trading advantages said Rob Passarella, global director of strategy at Dow Jones Enterprise Media Group. His firm provides both a low latency news feed and news analytics for traders. Passarella also pointed to new academic research being conducted on the degree to which frequent Google searches on various stocks can serve as trading indicators, the potential impact of various phrases and words that may appear in Securities and Exchange Commission statements and the latest wave of online communities devoted to stock trading topics.[81]

“Markets are by their very nature conversations, having grown out of coffee houses and taverns,” he said. So the way conversations get created in a digital society will be used to convert news into trades, as well, Passarella said.[81]

“There is a real interest in moving the process of interpreting news from the humans to the machines” says Kirsti Suutari, global business manager of algorithmic trading at Reuters. “More of our customers are finding ways to use news content to make money.”[80]

An example of the importance of news reporting speed to algorithmic traders was an advertising campaign by Dow Jones (appearances included page W15 of the Wall Street Journal, on March 1, 2008) claiming that their service had beaten other news services by two seconds in reporting an interest rate cut by the Bank of England.

In July 2007, Citigroup, which had already developed its own trading algorithms, paid $680 million for Automated Trading Desk, a 19-year-old firm that trades about 200 million shares a day.[82] Citigroup had previously bought Lava Trading and OnTrade Inc.

In late 2010, the UK Government Office for Science initiated a Foresight project investigating the future of computer trading in the financial markets,[83] led by Dame Clara Furse, ex-CEO of the London Stock Exchange, and in September 2011 the project published its initial findings in the form of a three-chapter working paper available in three languages, along with 16 additional papers that provide supporting evidence.[84] All of these findings are authored or co-authored by leading academics and practitioners, and were subjected to anonymous peer-review. Released in 2012, the Foresight study acknowledged issues related to periodic illiquidity, new forms of manipulation and potential threats to market stability due to errant algorithms or excessive message traffic. However, the report was also criticized for adopting “standard pro-HFT arguments” and advisory panel members being linked to the HFT industry.[85]

System architecture

A traditional trading system consists primarily of two blocks: one that receives market data and one that sends the order request to the exchange. However, an algorithmic trading system can be broken down into three parts:[86]

  1. Exchange
  2. The server
  3. Application
[Figure: Traditional architecture of algorithmic trading systems]

Exchange(s) provide data to the system, which typically consists of the latest order book, traded volumes, and the last traded price (LTP) of the scrip. The server in turn receives the data, simultaneously acting as a store for the historical database. The data is analyzed on the application side, where trading strategies are fed in by the user and can be viewed on the GUI. Once an order is generated, it is sent to the order management system (OMS), which in turn transmits it to the exchange.
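A minimal, hypothetical sketch of this three-part flow; all names and the decision rule are illustrative, and real systems separate these components across processes and machines:

```python
class OrderManagementSystem:
    def submit(self, order):
        # Stand-in for transmission to the exchange gateway.
        print("to exchange:", order)

def strategy(tick, history):
    """Toy application-side decision rule: buy when the last traded
    price (LTP) drops 1% below the average of recently stored ticks."""
    history.append(tick)                       # server acting as the store
    prices = [t["ltp"] for t in history[-20:]]
    average = sum(prices) / len(prices)
    if tick["ltp"] < 0.99 * average:
        return {"symbol": tick["symbol"], "side": "buy", "qty": 100}
    return None

def on_market_data(tick, history, oms):
    """Exchange data arrives, the application decides, and any
    resulting order is handed to the OMS for transmission."""
    order = strategy(tick, history)
    if order is not None:
        oms.submit(order)

# Hypothetical usage with two synthetic ticks:
oms, history = OrderManagementSystem(), []
on_market_data({"symbol": "XYZ", "ltp": 100.0}, history, oms)
on_market_data({"symbol": "XYZ", "ltp": 98.0}, history, oms)  # 2% drop -> order
```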

Gradually, the old-school, high-latency architecture of algorithmic systems is being replaced by newer, state-of-the-art, low-latency network infrastructure. The complex event processing (CEP) engine, which is the heart of decision making in algo-based trading systems, is used for order routing and risk management.

With the emergence of the FIX (Financial Information Exchange) protocol, connecting to different destinations has become easier, and time-to-market is reduced when connecting with a new destination. With the standard protocol in place, integrating third-party vendors for data feeds is no longer cumbersome.

[Figure: Emergence of protocols in algorithmic trading]

Effects

Though its development may have been prompted by decreasing trade sizes caused by decimalization, algorithmic trading has reduced trade sizes further. Jobs once done by human traders are being switched to computers. The speeds of computer connections, measured in milliseconds and even microseconds, have become very important.[87][88]

More fully automated markets such as NASDAQ, Direct Edge and BATS (formerly an acronym for Better Alternative Trading System) in the US, have gained market share from less automated markets such as the NYSE. Economies of scale in electronic trading have contributed to lowering commissions and trade processing fees, and contributed to international mergers and consolidation of financial exchanges.

Competition is developing among exchanges for the fastest processing times for completing trades. For example, in June 2007, the London Stock Exchange launched a new system called TradElect that promises an average 10 millisecond turnaround time from placing an order to final confirmation and can process 3,000 orders per second.[89] Since then, competitive exchanges have continued to reduce latency with turnaround times of 3 milliseconds available. This is of great importance to high-frequency traders, because they have to attempt to pinpoint the consistent and probable performance ranges of given financial instruments. These professionals are often dealing in versions of stock index funds like the E-mini S&Ps, because they seek consistency and risk-mitigation along with top performance. They must filter market data to work into their software programming so that there is the lowest latency and highest liquidity at the time for placing stop-losses and/or taking profits. With high volatility in these markets, this becomes a complex and potentially nerve-wracking endeavor, where a small mistake can lead to a large loss. Absolute frequency data play into the development of the trader’s pre-programmed instructions.[90]

In the U.S., spending on computers and software in the financial industry increased to $26.4 billion in 2005.[2][91]

Algorithmic trading has caused a shift in the types of employees working in the financial industry. For example, many physicists have entered the financial industry as quantitative analysts. Some physicists have even begun to do research in economics as part of doctoral research. This interdisciplinary movement is sometimes called econophysics.[92] Some researchers also cite a “cultural divide” between employees of firms primarily engaged in algorithmic trading and traditional investment managers. Algorithmic trading has encouraged an increased focus on data and has decreased emphasis on sell-side research.[93]

Communication standards

Algorithmic trades require communicating considerably more parameters than traditional market and limit orders. A trader on one end (the “buy side”) must enable their trading system (often called an “order management system” or “execution management system”) to understand a constantly proliferating flow of new algorithmic order types. The R&D and other costs to construct complex new algorithmic order types, along with the execution infrastructure and the marketing costs to distribute them, are fairly substantial. What was needed was a way that marketers (the “sell side”) could express algo orders electronically such that buy-side traders could just drop the new order types into their system and be ready to trade them, without constantly coding custom new order-entry screens each time.

FIX Protocol is a trade association that publishes free, open standards in the securities trading area. The FIX language was originally created by Fidelity Investments, and the association’s members include virtually all large and many midsized and smaller broker-dealers, money center banks, institutional investors, mutual funds, etc. This institution dominates standard setting in the pretrade and trade areas of security transactions. In 2006–2007 several members got together and published a draft XML standard for expressing algorithmic order types. The standard is called FIX Algorithmic Trading Definition Language (FIXatdl).[94]

Notes

  1. As an arbitrage consists of at least two trades, the metaphor is of putting on a pair of pants, one leg (trade) at a time. The risk that one trade (leg) fails to execute is thus ‘leg risk’.

References

Source: Algorithmic trading, https://en.wikipedia.org/w/index.php?title=Algorithmic_trading&oldid=871194564 (last visited Dec. 2, 2018).

Automated trading (Automatisierter Handel)

Automated or algorithmic trading (also Algorithmic Trading, Algo Trading, Black Box, High Frequency Trading, Flash Trading[1] or Grey Box Trading) colloquially refers in general to the automated trading of securities by computer programs.

Under the German Securities Trading Act (§ 80 Abs. 2 WpHG), algorithmic trading is described as trading in financial instruments in which a computer algorithm automatically decides on the execution and the parameters of the order. Excluded from this are systems that only confirm orders or forward them to other trading venues.

To date, no unambiguous definition has established itself in the literature of business informatics and economics. Many authors understand it to mean computer programs that are used to route existing buy and sell orders electronically to the exchange.[2] The other group of authors understands it to mean computer programs that make buy and sell decisions autonomously.[3] In this context, algorithmic trading can be distinguished between buy-side and sell-side financial institutions.

History

On the growth of automated trading: exchanges report that it accounts for up to 50 percent of turnover. At Eurex, automated trading quadrupled between 2004 and 2006, whereas traditional trading grew only slightly. Eurex assumes that roughly 20–30% of total turnover is currently generated by automated trading, and within Eurex a growth rate of about 20% per year is expected. According to a study by the Aite Group, in 2006 about a third of all securities trading was controlled by automatic computer programs and algorithms; Aite estimated that this share could reach about 50% by 2010.[4] As Gomolka shows, however, these figures on exchange turnover must be treated with caution,[3] because exchanges see only those orders that are transmitted to the exchange by machines and captured in the electronic order books (see transaction support). What share of exchange turnover is generated by machines (see decision support), and what share is entered into the order systems by human traders, cannot be measured by the exchanges.

In early July 2009, a former employee of the American financial services firm Goldman Sachs was arrested by the FBI on suspicion of having stolen parts of the software the company uses for automated trading. According to the prosecution, the software was also suitable “for manipulating markets in an unfair way”.[5] He has since been acquitted, however, because under US law he had not stolen a physical object; for the most part, the programs he took with him were open-source programs that he had merely improved himself.[6]

Algorithmic trading for order placement

Depending on the degree of automation, the computer can decide autonomously on certain aspects of the order (timing, price, volume, or the moment of order submission). In so-called “sell-side algo trading” (e.g. at brokerages), large orders are split into several smaller trades; this allows market impact, opportunity costs, and risks to be managed.[7] The algorithm determines the splitting and the timing of the orders on the basis of predefined parameters, which typically use both historical and current market data. Brokers use algorithmic trading for proprietary trading on the one hand, but also offer it as a service to their customers (given the complexity and the resources involved, institutional investors tend to rely on brokers’ solutions). The advantages of automated trading are the high speed at which trades can be placed and the larger amount of relevant information, compared to a human, that can be observed and processed; this also goes hand in hand with lower transaction costs.[8] A prerequisite for algorithmic trading is that an order or a trading strategy already exists. In contrast to automatic trading or quote machines, the goal here is to distribute an order intelligently across different markets, not to automatically fire quotes into the market on the basis of parameters. A simple slicing schedule is sketched below.
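
The order splitting just described can be illustrated with a deliberately simple TWAP (time-weighted average price) schedule: a parent order divided into equal child orders over a time window. The function name and interface are assumptions made here for illustration; real sell-side algorithms adapt slice sizes to live volume and market-impact models.

```python
# Toy TWAP slicer: split a large parent order into equal child orders
# spread evenly over a trading window.
from datetime import datetime, timedelta

def twap_slices(total_qty, start, end, n_slices):
    step = (end - start) / n_slices
    base, rem = divmod(total_qty, n_slices)
    schedule = []
    for i in range(n_slices):
        qty = base + (1 if i < rem else 0)  # distribute any remainder
        schedule.append((start + i * step, qty))
    return schedule

start = datetime(2018, 12, 3, 9, 30)
for when, qty in twap_slices(75_000, start, start + timedelta(minutes=20), 8):
    print(when.time(), qty)
```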

Automated trading as decision support

Automated trading is used by hedge funds, pension funds, mutual funds, banks, and other institutional investors to generate and/or execute orders automatically. Here, computers autonomously generate buy and sell signals that are converted into orders at the financial venue before humans can intervene at all. Algorithmic trading can be used with any investment strategy: market making, inter-market spreading, arbitrage, trend-following models, or speculation. The concrete application of computer models in the investment decision and its execution varies: computers can be used merely to support the investment analysis (quant funds), or orders can be both generated automatically and forwarded to the trading venues (autopilot). The difficulty of algorithmic trading lies in aggregating and analyzing historical market data and aggregating real-time prices to enable trading. Setting up and testing mathematical models is likewise not trivial; a deliberately simple example of such a signal-generating rule follows below.
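
To make the “decision support” mode concrete, here is a minimal autonomous signal generator: a moving-average crossover, one of the simplest trend-following rules. It stands in for the far more complex proprietary models the text refers to; all names here are illustrative.

```python
# Minimal trend-following signal: moving-average crossover.
def sma(series, window):
    """Simple moving average over a fixed window."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def crossover_signals(prices, fast=3, slow=5):
    f, s = sma(prices, fast), sma(prices, slow)
    f = f[len(f) - len(s):]          # align the two series on the right
    signals = []
    for i in range(1, len(s)):
        # BUY when the fast average crosses above the slow one, SELL on
        # the opposite crossing.
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "BUY"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "SELL"))
    return signals

prices = [100, 101, 103, 102, 101, 99, 98, 100, 103, 105]
print(crossover_signals(prices))
```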

Distinguishing high-frequency trading from systematic trading

In the literature, algorithmic trading is often equated with high-frequency trading, in which securities are bought and sold again within fractions of a second. According to a study by FINalternatives, however, fund managers categorize the field of algorithmic trading in very different ways.[9] Over 60% of respondents understand high-frequency trading to mean transactions on time scales of 1 second to 10 minutes; about 15% of respondents understand it to mean transactions on time scales of 1–5 days. Aldridge (2009) categorizes algorithmic trading exclusively as high-frequency trading.[10] Gomolka (2011), by contrast, subsumes under algorithmic trading both high-frequency trading (in fractions of a second) and systematic trading (longer term, over several days).[11] He emphasizes that computer programs are not only used in the short term (e.g. for flash trading), but can also trade autonomously over longer horizons of several minutes, hours, or days.

Effects on financial market stability

In contrast to an electronic exchange, in which computers serve only as a communication platform for matching suitable buy and sell offers, such a system places offers autonomously and seeks out trading partners. These systems are held partly responsible for the stock market crash of October 19, 1987, Black Monday: their “if-then” algorithms are said to have ensured that more and more blocks of shares were dumped after prices had begun to fall, ultimately leading to panic selling. On May 6, 2010, the Dow Jones fell by over 1,000 points within eight minutes. This crash was not originally triggered by high-frequency programs, however, but by a sell order from the trading house Waddell & Reed, which placed 75,000 S&P 500 E-mini futures contracts into the market as market orders within 20 minutes. This flash crash prompted the SEC to tighten its circuit-breaker rules, under which price drops of more than 10% in a stock are in future to trigger an automatic suspension of trading.[12]

Source: “Automatisierter Handel”. In: Wikipedia, Die freie Enzyklopädie. Revision of: 17 November 2018, 14:17 UTC. URL: https://de.wikipedia.org/w/index.php?title=Automatisierter_Handel&oldid=182833450 (retrieved: 2 December 2018, 17:06 UTC)

 

Trading hours

All stock exchanges have specified trading hours.

The following is a list of opening and closing times for stock and futures exchanges worldwide. It includes a partial list of stock exchanges and the corresponding times the exchange opens and closes, along with the time zone within which the exchange is located. Markets are open Monday through Friday and closed on Saturday and Sunday in their respective local time zones.[1]

World exchanges[notes 1]
Each row begins with the country name (the caption of a flag icon in the original table), followed by: exchange name, ID, country, city; the time zone[notes 2] (zone abbreviation, UTC offset Δ, and DST period); the local open, close, and lunch-break times; and the corresponding UTC[notes 3] open, close, and lunch times.
New Zealand New Zealand Stock Market NZSX New Zealand Wellington NZST +12 Sep–Apr 10:00 16:45 No 22:00 05:00 No
Australia Australian Securities Exchange ASX Australia Sydney AEST +10 Oct–Apr 10:00 16:00 No 00:00 06:00 No
Japan Tokyo Stock Exchange TSE Japan Tokyo JST +9 09:00 15:00[2] 11:30–12:30 00:00 06:00 02:30–03:30
South Korea Korea Stock Exchange KRX South Korea Busan & Seoul KST +9 09:00 15:30 No 00:00 06:00 No
Malaysia Bursa Malaysia MYX Malaysia Kuala Lumpur MYT +8 09:00 17:00 12:30–14:30 01:00 09:00 04:30–06:30
Singapore Singapore Exchange SGX Singapore Singapore SGT +8 09:00 17:00 12:00–13:00[3] 01:00 09:00 04:00–05:00
Taiwan Taiwan Stock Exchange TWSE Taiwan (Republic of China) Taipei CST +8 09:00 13:30 No 01:00 05:30 No
Hong Kong Hong Kong Futures Exchange HKFE Hong Kong Hong Kong HKT +8 09:15 16:00 12:00–13:00 01:15 08:00 04:00–05:00
Hong Kong Hong Kong Stock Exchange HKEX Hong Kong Hong Kong HKT +8 09:30[4] 16:00 12:00–13:00 01:30 08:00 04:00–05:00
China Shanghai Stock Exchange SSE China Shanghai CST +8 09:30 15:00[notes 4] 11:30–13:00 01:30 07:00 03:30–05:00
China Shenzhen Stock Exchange SZSE China Shenzhen CST +8 09:30 15:00[notes 4] 11:30–13:00 01:30 07:00 03:30–05:00
Philippines Philippine Stock Exchange PSE Philippines Manila PHT +8 09:30 15:30 12:00–13:30 01:30 07:30 04:00–05:30
Indonesia Indonesia Stock Exchange IDX Indonesia Jakarta WIB +7 09:00 16:00 12:00-13:30 (Mon-Thu) 11:30-14:00 (Fri) 02:00 09:00 05:00-06:30 (Mon-Thu) 04:30-07:00 (Fri)
Vietnam Hochiminh Stock Exchange HOSE Vietnam Hochiminh ICT +7 09:00 14:45 11:30–13:00 02:00 07:45 04:30–06:00
Vietnam Hanoi Stock Exchange HNX Vietnam Hanoi ICT +7 09:00 14:45 11:30–13:00 02:00 07:45 04:30–06:00
Thailand Stock Exchange of Thailand SET Thailand Bangkok ICT +7 10:00 16:30 12:30–14:30 03:00 09:30 05:30–07:30
Bangladesh Chittagong Stock Exchange CSE Bangladesh Chittagong BST +6 10:30 14:30 No 04:30 08:30 No
Bangladesh Dhaka Stock Exchange DSE Bangladesh Dhaka BST +6 10:30 14:30 No 04:30 08:30 No
India Bombay Stock Exchange[5] BSE India Mumbai IST +5.5 09:15 15:30 No 03:45 10:00 No
India National Stock Exchange of India[6] NSE India Mumbai IST +5.5 09:15 15:30 No 03:45 10:00 No
Sri Lanka Colombo Stock Exchange CSE Sri Lanka Colombo SLST +5.5 09:30 14:30 No 04:00 09:00 No
Pakistan Pakistan Stock Exchange PSX Pakistan Karachi PKT +5 09:30 15:30 No 04:30 10:30 No
Iran Tehran Stock Exchange TSE Iran Tehran IRST +3.5 09:00 12:30 No 05:30 09:00 No
Tanzania Dar es Salaam Stock Exchange DSE Tanzania Dar es Salaam EAT +3 08:00 17:00 No 05:00 14:00 No
Russia Moscow Exchange MOEX Russia Moscow MSK +3 10:00 18:45 No 07:00 15:45 No
Saudi Arabia Saudi Stock Exchange TADAWUL Saudi Arabia Riyadh AST +3 10:00 15:00 No 07:00 12:00 No
Kenya Nairobi Securities Exchange NSE Kenya Nairobi EAT +3 09:30 15:00 No 06:30 12:00 No
Finland Helsinki Stock Exchange OMX Finland Helsinki EET +2 Mar–Oct 10:00 18:30 No 08:00 16:30 No
Ukraine Ukrainian Exchange UX Ukraine Kiev EET +2 Mar–Oct 10:00 17:30 No 08:00 15:30 No
Latvia Riga Stock Exchange OMXR Latvia Riga EET +2 Mar–Oct 10:00 16:00 No 08:00 14:00 No
Estonia Tallinn Stock Exchange OMXT Estonia Tallinn EET +2 Mar–Oct 10:00 16:00 No 08:00 14:00 No
Lithuania NASDAQ OMX Vilnius OMXV Lithuania Vilnius EET +2 Mar–Oct 10:00 16:00 No 08:00 14:00 No
Jordan Amman Stock Exchange ASE Jordan Amman EET +2 Mar–Oct 10:00 12:00 No 08:00 10:00 No
Israel Tel Aviv Stock Exchange TASE Israel Tel Aviv IST +2 Mar–Oct 09:00 17:30 No 07:00 15:30 No
Lebanon Beirut Stock Exchange BSE Lebanon Beirut EET +2 Mar–Oct 09:30 12:30 No 07:30 10:30 No
South Africa Johannesburg Stock Exchange JSE South Africa Johannesburg SAST +2 09:00 17:00 No 07:00 15:00 No
Turkey Borsa Istanbul ISE Turkey Istanbul TRT +3 10:00 18:00[7][8] 13:00–14:00 09:00 16:00 11:00–12:00
Germany Frankfurt Stock Exchange (Xetra) FSX Germany Frankfurt CET +1 Mar–Oct 08:00 20:00[notes 5] No 07:00 19:00 No
Germany Eurex Exchange EUREX Germany Eschborn CET +1 Mar–Oct 08:00 22:00 No 07:00 21:00 No
Poland Warsaw Stock Exchange GPW Poland Warsaw CET +1 Mar–Oct 09:00 17:00[9] No 08:00 16:00 No
Austria Wiener Börse AG VSE Austria Vienna CET +1 Mar–Oct 08:55 17:35 No 07:55 16:35 No
Hungary Budapest Stock Exchange BSE Hungary Budapest CET +1 Mar–Oct 09:00 17:00[10] No 08:00 16:00 No
France Euronext Paris EPA France Paris CET +1 Mar–Oct 09:00 17:30 No 08:00 16:30 No
Morocco Casablanca Stock Exchange CSE Morocco Casablanca WET +0 Mar–Oct 08:10 15:55 No 08:10 15:55 No
Switzerland Swiss Exchange SIX Switzerland Zurich CET +1 Mar–Oct 09:00 17:30 No 08:00 16:30 No
Switzerland Berne eXchange BX Switzerland Berne CET +1 Mar–Oct 09:00 16:30 No 08:00 15:30 No
Spain Spanish Stock Exchange BME Spain Madrid CET +1 Mar–Oct 09:00 17:30 No 08:00 16:30 No
Italy Milan Stock Exchange MTA Italy Milan CET +1 Mar–Oct 09:00 17:35 No 08:00 16:35 No
Netherlands Euronext Amsterdam AMS Netherlands Amsterdam CET +1 Mar–Oct 09:00 17:40 No 08:00 16:40 No
Luxembourg Luxembourg Stock Exchange LuxSE Luxembourg Luxembourg City CET +1 Mar–Oct 09:00 17:35 No 08:00 16:35 No
Sweden Stockholm Stock Exchange OMX Sweden Stockholm CET +1 Mar–Oct 09:00 17:30 No 08:00 16:30 No
Norway Oslo Stock Exchange OSE Norway Oslo CET +1 Mar–Oct 09:00 16:30 No 08:00 15:30 No
Denmark Copenhagen Stock Exchange CSE Denmark Copenhagen CET +1 Mar–Oct 09:00 17:00 No 08:00 16:00 No
Malta Malta Stock Exchange MSE Malta Valletta CET +1 Mar–Oct 09:30 12:30[11] No 08:30 11:30 No
Nigeria Nigerian Stock Exchange NSE Nigeria Lagos WAT +1 10:00 16:00 No 09:00 15:00 No
United Kingdom London Stock Exchange LSE United Kingdom London GMT +0 Mar–Oct 08:00 16:30 No 08:00 16:30 No
Portugal Euronext Lisbon PSI Portugal Lisbon GMT +0 Mar–Oct 08:00 16:30 No 08:00 16:30 No
Republic of Ireland Irish Stock Exchange ISE Ireland Dublin GMT +0 Mar–Oct 08:00 16:30 No 08:00 16:30 No
Brazil Bolsa de Valores de São Paulo Bovespa Brazil São Paulo BRT −3 Oct–Feb 10:00 17:30 No 13:00 20:00 No
Argentina Buenos Aires Stock Exchange BCBA Argentina Buenos Aires ART −3 11:00 17:00 No 14:00 20:00 No
United States New York Stock Exchange NYSE United States New York EST −5 Mar–Nov 09:30 16:00[12] No 14:30 21:00 No
United States NASDAQ NASDAQ United States New York EST −5 Mar–Nov 09:30 16:00[13] No 14:30 21:00 No
Canada Toronto Stock Exchange TSX Canada Toronto EST −5 Mar–Nov 09:30 16:00 No 14:30 21:00 No
Mexico Mexican Stock Exchange BMV Mexico Mexico City CST −6 Apr–Oct 08:30 15:00 No 14:30 21:00 No
Jamaica Jamaica Stock Exchange JSE Jamaica Kingston EST −5 09:00 13:00[14] No 14:00 17:00 No

Source: List of stock exchange trading hours, https://en.wikipedia.org/w/index.php?title=List_of_stock_exchange_trading_hours&oldid=862465180 (last visited Nov. 23, 2018).
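
As a worked use of the table, the sketch below (Python 3.9+, standard library only) converts a few exchanges’ local sessions to UTC and checks whether each is open at a given instant. Session times are taken from the table above, lunch breaks and holidays are ignored for brevity, and DST handling comes from the IANA time-zone database via zoneinfo.

```python
# Is an exchange open at a given UTC instant? Session times from the
# table above; DST is handled by the IANA tz database via zoneinfo.
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

SESSIONS = {  # local open/close (lunch breaks and holidays ignored)
    "NYSE": ("America/New_York", time(9, 30), time(16, 0)),
    "LSE":  ("Europe/London",    time(8, 0),  time(16, 30)),
    "TSE":  ("Asia/Tokyo",       time(9, 0),  time(15, 0)),
}

def is_open(exchange, when_utc):
    tz, open_t, close_t = SESSIONS[exchange]
    local = when_utc.astimezone(ZoneInfo(tz))
    # Monday=0 .. Friday=4; markets are closed on weekends.
    return local.weekday() < 5 and open_t <= local.time() < close_t

now = datetime(2018, 11, 23, 14, 45, tzinfo=timezone.utc)  # a Friday
for ex in SESSIONS:
    print(ex, "open" if is_open(ex, now) else "closed")
```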

Trading hours (Handelszeiten)

Exchanges are open on trading days. With regard to trading hours (or exchange hours), all exchanges distinguish between floor trading and electronic trading (such as Xetra). Smaller exchanges often offer only floor trading. Floor trading begins at 08:00 local time (times refer to Germany) at the Frankfurt and Stuttgart exchanges, and likewise at 08:00 at the Berlin, Düsseldorf, and Munich exchanges; it ends at 20:00 local time, and in Stuttgart at 22:00. Xetra trading begins at 9:00 and already ends at 17:30 local time. The trading hours of NASDAQ, the largest electronic exchange in the USA, and of the NYSE are 9:30 to 16:00 New York local time (EST), which corresponds to 15:30 to 22:00 German time (CET).

The Tokyo Stock Exchange trades from 9:00 to 11:30 and from 12:30 to 15:00 local time (corresponding to 1:00 to 3:30 and 4:30 to 7:00 CET).

Source: “Börse”. In: Wikipedia, Die freie Enzyklopädie. Revision of: 29 September 2018, 11:37 UTC. URL: https://de.wikipedia.org/w/index.php?title=B%C3%B6rse&oldid=181331780 (retrieved: 23 November 2018, 10:35 UTC)

Wireless power transfer

Inductive charging pad for LG smartphone, using the Qi system, an example of near-field wireless transfer. When the phone is set on the pad, a coil in the pad creates a magnetic field[1] which induces a current in another coil, in the phone, charging its battery.

Wireless power transfer (WPT), wireless power transmission, wireless energy transmission, or electromagnetic power transfer is the transmission of electrical energy without wires. Wireless power transmission technologies use time-varying electric, magnetic, or electromagnetic fields. Wireless transmission is useful to power electrical devices where interconnecting wires are inconvenient, hazardous, or not possible.

Wireless power techniques mainly fall into two categories, non-radiative and radiative. In near field or non-radiative techniques, power is transferred by magnetic fields using inductive coupling between coils of wire, or by electric fields using capacitive coupling between metal electrodes. Inductive coupling is the most widely used wireless technology; its applications include charging handheld devices like phones and electric toothbrushes, RFID tags, and chargers for implantable medical devices[2] like artificial cardiac pacemakers, or electric vehicles.

In far-field or radiative techniques, also called power beaming, power is transferred by beams of electromagnetic radiation, like microwaves or laser beams. These techniques can transport energy over longer distances but must be aimed at the receiver. Proposed applications for this type are solar power satellites and wirelessly powered drone aircraft.[3][4][5]

An important issue associated with all wireless power systems is limiting the exposure of people and other living things to potentially injurious electromagnetic fields.[6][7]

Overview

Generic block diagram of a wireless power system

There are a number of different technologies for transmitting energy by means of electromagnetic fields.[8][9][10] The technologies, listed in the table below, differ in the distance over which they can transfer power efficiently, whether the transmitter must be aimed (directed) at the receiver, and in the type of electromagnetic energy they use: time varying electric fields, magnetic fields, radio waves, microwaves, infrared or visible light waves.[11]

In general a wireless power system consists of a “transmitter” connected to a source of power such as a mains power line, which converts the power to a time-varying electromagnetic field, and one or more “receiver” devices which receive the power and convert it back to DC or AC electric current which is used by an electrical load.[8][11] At the transmitter the input power is converted to an oscillating electromagnetic field by some type of “antenna” device. The word “antenna” is used loosely here; it may be a coil of wire which generates a magnetic field, a metal plate which generates an electric field, an antenna which radiates radio waves, or a laser which generates light. A similar antenna or coupling device at the receiver converts the oscillating fields to an electric current. An important parameter that determines the type of waves is the frequency, which determines the wavelength.

Wireless power uses the same fields and waves as wireless communication devices like radio,[12][13] another familiar technology that involves electrical energy transmitted without wires by electromagnetic fields, used in cellphones, radio and television broadcasting, and WiFi. In radio communication the goal is the transmission of information, so the amount of power reaching the receiver is not so important, as long as it is sufficient that the information can be received intelligibly.[9][12][13] In wireless communication technologies only tiny amounts of power reach the receiver. In contrast, with wireless power the amount of energy received is the important thing, so the efficiency (fraction of transmitted energy that is received) is the more significant parameter.[9] For this reason, wireless power technologies are likely to be more limited by distance than wireless communication technologies.

These are the different wireless power technologies:[8][11][14][15][16]

Technology Range[17] Directivity[11] Frequency Antenna devices Current and/or possible future applications
Inductive coupling Short Low Hz – MHz Wire coils Electric toothbrush and razor battery charging, induction stovetops and industrial heaters.
Resonant inductive coupling Mid Low kHz – GHz Tuned wire coils, lumped element resonators Charging portable devices (Qi), biomedical implants, electric vehicles, powering buses, trains, MAGLEV, RFID, smartcards.
Capacitive coupling Short Low kHz – MHz Metal plate electrodes Charging portable devices, power routing in large-scale integrated circuits, Smartcards.
Magnetodynamic coupling[15] Short N.A. Hz Rotating magnets Charging electric vehicles, buses, biomedical implants.
Microwaves Long High GHz Parabolic dishes, phased arrays, rectennas Solar power satellite, powering drone aircraft, charging wireless devices
Light waves Long High ≥THz Lasers, photocells, lenses Powering drone aircraft, powering space elevator climbers.

Field regions

Electric and magnetic fields are created by charged particles in matter such as electrons. A stationary charge creates an electrostatic field in the space around it. A steady current of charges (direct current, DC) creates a static magnetic field around it. The above fields contain energy, but cannot carry power because they are static. However time-varying fields can carry power.[18] Accelerating electric charges, such as are found in an alternating current (AC) of electrons in a wire, create time-varying electric and magnetic fields in the space around them. These fields can exert oscillating forces on the electrons in a receiving “antenna”, causing them to move back and forth. These represent alternating current which can be used to power a load.

The oscillating electric and magnetic fields surrounding moving electric charges in an antenna device can be divided into two regions, depending on distance Drange from the antenna.[8][11][12][14][19][20] [21] The boundary between the regions is somewhat vaguely defined.[11] The fields have different characteristics in these regions, and different technologies are used for transferring power:

  • Near-field or nonradiative region – This means the area within about 1 wavelength (λ) of the antenna.[8][19][20] In this region the oscillating electric and magnetic fields are separate[12] and power can be transferred via electric fields by capacitive coupling (electrostatic induction) between metal electrodes, or via magnetic fields by inductive coupling (electromagnetic induction) between coils of wire.[9][11][12][14] These fields are not radiative,[20] meaning the energy stays within a short distance of the transmitter.[22] If there is no receiving device or absorbing material within their limited range to “couple” to, no power leaves the transmitter.[22] The range of these fields is short, and depends on the size and shape of the “antenna” devices, which are usually coils of wire. The fields, and thus the power transmitted, fall off steeply with distance (as a high power of the distance; see the next section),[19][21][23] so if the distance between the two “antennas” Drange is much larger than the diameter of the “antennas” Dant, very little power will be received. Therefore, these techniques cannot be used for long range power transmission.
Resonance, such as resonant inductive coupling, can increase the coupling between the antennas greatly, allowing efficient transmission at somewhat greater distances,[8][12][14][19][24][25] although the fields still fall off steeply with distance. Therefore, the range of near-field devices is conventionally divided into two categories:

  • Short range – up to about one antenna diameter: Drange ≤ Dant.[22][24][26] This is the range over which ordinary nonresonant capacitive or inductive coupling can transfer practical amounts of power.
  • Mid-range – up to 10 times the antenna diameter: Drange ≤ 10 Dant.[24][25][26][27] This is the range over which resonant capacitive or inductive coupling can transfer practical amounts of power.
  • Far-field or radiative region – Beyond about 1 wavelength (λ) of the antenna, the electric and magnetic fields are perpendicular to each other and propagate as an electromagnetic wave; examples are radio waves, microwaves, or light waves.[8][14][19] This part of the energy is radiative,[20] meaning it leaves the antenna whether or not there is a receiver to absorb it. The portion of energy which does not strike the receiving antenna is dissipated and lost to the system. The amount of power emitted as electromagnetic waves by an antenna depends on the ratio of the antenna’s size Dant to the wavelength of the waves λ,[28] which is determined by the frequency: λ = c/f. At low frequencies f where the antenna is much smaller than the size of the waves, Dant << λ, very little power is radiated. Therefore the near-field devices above, which use lower frequencies, radiate almost none of their energy as electromagnetic radiation. Antennas about the same size as the wavelength Dant ≈ λ such as monopole or dipole antennas, radiate power efficiently, but the electromagnetic waves are radiated in all directions (omnidirectionally), so if the receiving antenna is far away, only a small amount of the radiation will hit it.[20][24] Therefore, these can be used for short range, inefficient power transmission but not for long range transmission.[29]
However, unlike fields, electromagnetic radiation can be focused by reflection or refraction into beams. By using a high-gain antenna or optical system which concentrates the radiation into a narrow beam aimed at the receiver, it can be used for long range power transmission.[24][29] From the Rayleigh criterion, to produce the narrow beams necessary to focus a significant amount of the energy on a distant receiver, an antenna must be much larger than the wavelength of the waves used: Dant >> λ = c/f.[30] Practical beam power devices require wavelengths in the centimeter region or below, corresponding to frequencies above 1 GHz, in the microwave range or above.[8]

Near-field (nonradiative) techniques

At large relative distances, the near-field components of electric and magnetic fields are approximately quasi-static oscillating dipole fields. These fields decrease with the cube of distance: (Drange/Dant)−3.[21][31] Since power is proportional to the square of the field strength, the power transferred decreases as (Drange/Dant)−6,[12][23][32][33] or 60 dB per decade. In other words, when the antennas are far apart, doubling the distance between them causes the power received to decrease by a factor of 2⁶ = 64. As a result, inductive and capacitive coupling can only be used for short-range power transfer, within a few times the diameter of the antenna device Dant. Unlike in a radiative system, where the maximum radiation occurs when the dipole antennas are oriented transverse to the direction of propagation, with dipole fields the maximum coupling occurs when the dipoles are oriented longitudinally.
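
Written out, the scaling just described is (using the document’s Dant and Drange notation):

```latex
E,\; B \;\propto\; \left(\frac{D_\mathrm{range}}{D_\mathrm{ant}}\right)^{-3},
\qquad
P_\mathrm{received} \;\propto\; E^{2} \;\propto\; \left(\frac{D_\mathrm{range}}{D_\mathrm{ant}}\right)^{-6}.
```

In decibels this is 10·log₁₀(10⁻⁶) = −60 dB per tenfold increase in range, and 10·log₁₀(2⁻⁶) ≈ −18 dB (a factor of 2⁶ = 64) per doubling, matching the figures in the paragraph above.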

Inductive coupling

Generic block diagram of an inductive wireless power system
(left) Modern inductive power transfer, an electric toothbrush charger. A coil in the stand produces a magnetic field, inducing an alternating current in a coil in the toothbrush, which is rectified to charge the batteries.
(right) A light bulb powered wirelessly by induction, in 1910.

In inductive coupling (electromagnetic induction[14][34] or inductive power transfer, IPT), power is transferred between coils of wire by a magnetic field.[12] The transmitter and receiver coils together form a transformer[12][14] (see diagram). An alternating current (AC) through the transmitter coil (L1) creates an oscillating magnetic field (B) by Ampere’s law. The magnetic field passes through the receiving coil (L2), where it induces an alternating EMF (voltage) by Faraday’s law of induction, which creates an alternating current in the receiver.[9][34] The induced alternating current may either drive the load directly, or be rectified to direct current (DC) by a rectifier in the receiver, which drives the load. A few systems, such as electric toothbrush charging stands, work at 50/60 Hz so AC mains current is applied directly to the transmitter coil, but in most systems an electronic oscillator generates a higher frequency AC current which drives the coil, because transmission efficiency improves with frequency.[34]
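
The chain of laws in this paragraph compresses into one relation: for a sinusoidal transmitter current, the voltage induced in the receiver coil is proportional to both the drive frequency and the mutual inductance M between the coils (defined in the next paragraph):

```latex
I_1(t) = I_0\cos(\omega t)
\;\;\Longrightarrow\;\;
V_2(t) = -M\,\frac{dI_1}{dt} = \omega M I_0 \sin(\omega t).
```

The induced amplitude ωMI₀ grows linearly with frequency, which is why most systems drive the transmitter coil well above the 50/60 Hz mains frequency, as noted above.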

Inductive coupling is the oldest and most widely used wireless power technology, and virtually the only one so far which is used in commercial products. It is used in inductive charging stands for cordless appliances used in wet environments such as electric toothbrushes[14] and shavers, to reduce the risk of electric shock.[35] Another application area is “transcutaneous” recharging of biomedical prosthetic devices implanted in the human body, such as cardiac pacemakers and insulin pumps, to avoid having wires passing through the skin.[36][37] It is also used to charge electric vehicles such as cars and to either charge or power transit vehicles like buses and trains.[14][16]

However the fastest growing use is wireless charging pads to recharge mobile and handheld wireless devices such as laptop and tablet computers, cellphones, digital media players, and video game controllers.[16]

The power transferred increases with frequency[34] and the mutual inductance M between the coils,[9] which depends on their geometry and the distance Drange between them. A widely used figure of merit is the coupling coefficient k = M/√(L1·L2).[34][38] This dimensionless parameter is equal to the fraction of magnetic flux through the transmitter coil L1 that passes through the receiver coil L2 when L2 is open-circuited. If the two coils are on the same axis and close together so all the magnetic flux from L1 passes through L2, k = 1 and the link efficiency approaches 100%. The greater the separation between the coils, the more of the magnetic field from the first coil misses the second, and the lower k and the link efficiency are, approaching zero at large separations.[34] The link efficiency and power transferred are roughly proportional to k².[34] In order to achieve high efficiency, the coils must be very close together, a fraction of the coil diameter Dant,[34] usually within centimeters,[29] with the coils’ axes aligned. Wide, flat coil shapes are usually used, to increase coupling.[34] Ferrite “flux confinement” cores can confine the magnetic fields, improving coupling and reducing interference to nearby electronics,[34][36] but they are heavy and bulky so small wireless devices often use air-core coils.

Ordinary inductive coupling can only achieve high efficiency when the coils are very close together, usually adjacent. In most modern inductive systems resonant inductive coupling (described below) is used, in which the efficiency is increased by using resonant circuits.[20][25][34][39] This can achieve high efficiencies at greater distances than nonresonant inductive coupling.

Prototype inductive electric car charging system at 2011 Tokyo Auto Show
Powermat inductive charging spots in a coffee shop. Customers can set their phones and computers on them to recharge.
Wireless powered access card.

Resonant inductive coupling

Diagram of the resonant inductive wireless power system demonstrated by Marin Soljačić‘s MIT team in 2007. The resonant circuits were coils of copper wire which resonated with their internal capacitance (dotted capacitors) at 10 MHz. Power was coupled into the transmitter resonator, and out of the receiver resonator into the rectifier, by small coils which also served for impedance matching.

Resonant inductive coupling (electrodynamic coupling,[14] strongly coupled magnetic resonance[24]) is a form of inductive coupling in which power is transferred by magnetic fields (B, green) between two resonant circuits (tuned circuits), one in the transmitter and one in the receiver (see diagram, right).[12][14][20][35][39] Each resonant circuit consists of a coil of wire connected to a capacitor, or a self-resonant coil or other resonator with internal capacitance. The two are tuned to resonate at the same resonant frequency. The resonance between the coils can greatly increase coupling and power transfer, analogously to the way a vibrating tuning fork can induce sympathetic vibration in a distant fork tuned to the same pitch.

Nikola Tesla first discovered resonant coupling during his pioneering experiments in wireless power transfer around the turn of the 20th century,[40][41][42] but the possibilities of using resonant coupling to increase transmission range have only recently been explored.[43] In 2007 a team led by Marin Soljačić at MIT used two coupled tuned circuits, each made of a 25 cm self-resonant coil of wire at 10 MHz, to achieve the transmission of 60 W of power over a distance of 2 meters (6.6 ft) (8 times the coil diameter) at around 40% efficiency.[14][24][35][41][44] Soljačić founded the company WiTricity (the same name the team used for the technology), which is attempting to commercialize the technology.

The concept behind the WiTricity resonant inductive coupling system is that high Q factor resonators (Highly Resonant) exchange energy at a much higher rate than they lose energy due to internal damping.[24] Therefore, by using resonance, the same amount of power can be transferred at greater distances, using the much weaker magnetic fields out in the peripheral regions (“tails”) of the near fields (these are sometimes called evanescent fields[24]). Resonant inductive coupling can achieve high efficiency at ranges of 4 to 10 times the coil diameter (Dant).[25][26][27] This is called “mid-range” transfer,[26] in contrast to the “short range” of nonresonant inductive transfer, which can achieve similar efficiencies only when the coils are adjacent. Another advantage is that resonant circuits interact with each other so much more strongly than they do with nonresonant objects that power losses due to absorption in stray nearby objects are negligible.[20][24]
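
A common way to make this high-Q argument quantitative, found throughout the inductive-link literature (it is a general circuit result, not specific to WiTricity), is the maximum link efficiency attainable under ideal matching, in terms of the coupling coefficient k and the coil quality factors Q1 and Q2:

```latex
\eta_{\max} \;=\; \frac{k^{2}Q_{1}Q_{2}}{\left(1+\sqrt{1+k^{2}Q_{1}Q_{2}}\right)^{2}}.
```

Even weak coupling can thus give useful efficiency if the Q product is large: for example, k = 0.05 with Q1 = Q2 = 1000 gives k²Q1Q2 = 2500 and η_max ≈ 96%, which is the quantitative content of “exchanging energy faster than it is lost to internal damping”.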

A drawback of resonant coupling is that at close range, when the two resonant circuits are tightly coupled, the resonant frequency of the system is no longer constant but “splits” into two resonant peaks,[45][46][47] so the maximum power transfer no longer occurs at the original resonant frequency and the oscillator frequency must be tuned to the new resonance peak.[25][48] Operating at such a shifted peak is called “single resonant” operation.[49] Single-resonant systems, in which only the secondary is a tuned circuit, have also been used.[50] The principle behind this phenomenon is also called “(magnetic) phase synchronization”,[51][52] and it entered practical application for automated guided vehicles (AGVs) in Japan around 1993.[53] More recently, the high-Q resonator concept presented by the MIT researchers has been applied to the secondary-side resonator only, realizing a high-efficiency, wide-gap, high-power wireless power transfer system that is used for the induction current collector of the SCMaglev.[54][50]

Resonant technology is currently being widely incorporated in modern inductive wireless power systems.[34] One of the possibilities envisioned for this technology is area wireless power coverage. A coil in the wall or ceiling of a room might be able to wirelessly power lights and mobile devices anywhere in the room, with reasonable efficiency.[35] An environmental and economic benefit of wirelessly powering small devices such as clocks, radios, music players and remote controls is that it could drastically reduce the 6 billion batteries disposed of each year, a large source of toxic waste and groundwater contamination.[29]

Capacitive coupling

In capacitive coupling (electrostatic induction), the conjugate of inductive coupling, energy is transmitted by electric fields[9] between electrodes such as metal plates. The transmitter and receiver electrodes form a capacitor, with the intervening space as the dielectric.[9][12][14][36][55] An alternating voltage generated by the transmitter is applied to the transmitting plate, and the oscillating electric field induces an alternating potential on the receiver plate by electrostatic induction,[9][55] which causes an alternating current to flow in the load circuit. The amount of power transferred increases with the frequency,[55] the square of the voltage, and the capacitance between the plates, which is proportional to the area of the smaller plate and (for short distances) inversely proportional to the separation.[9]
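
Collecting the dependencies listed above into one proportionality, with the standard parallel-plate expression for the capacitance:

```latex
P \;\propto\; f\,C\,V^{2},
\qquad
C \;=\; \frac{\varepsilon_{0}\varepsilon_{r}A}{d},
```

where f is the frequency, V the plate voltage, A the area of the smaller plate, and d the separation. The V² factor is why practical power levels force the hazardous voltages discussed below.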

Capacitive wireless power systems
Bipolar coupling
Unipolar coupling

Capacitive coupling has only been used practically in a few low power applications, because the very high voltages on the electrodes required to transmit significant power can be hazardous,[12][14] and can cause unpleasant side effects such as noxious ozone production. In addition, in contrast to magnetic fields,[24] electric fields interact strongly with most materials, including the human body, due to dielectric polarization.[36] Intervening materials between or near the electrodes can absorb the energy, in the case of humans possibly causing excessive electromagnetic field exposure.[12] However, capacitive coupling has a few advantages over inductive coupling. The field is largely confined between the capacitor plates, reducing interference, whose suppression in inductive coupling requires heavy ferrite “flux confinement” cores.[9][36] Also, alignment requirements between the transmitter and receiver are less critical.[9][12][55] Capacitive coupling has recently been applied to charging battery-powered portable devices[56] and is being considered as a means of transferring power between substrate layers in integrated circuits.[57]

Two types of circuit have been used:

  • Bipolar design:[58][59] In this type of circuit, there are two transmitter plates and two receiver plates. Each transmitter plate is coupled to a receiver plate. The transmitter oscillator drives the transmitter plates in opposite phase (180° phase difference) by a high alternating voltage, and the load is connected between the two receiver plates. The alternating electric fields induce opposite phase alternating potentials in the receiver plates, and this “push-pull” action causes current to flow back and forth between the plates through the load. A disadvantage of this configuration for wireless charging is that the two plates in the receiving device must be aligned face to face with the charger plates for the device to work.[10]
  • Unipolar design:[9][55][59] In this type of circuit, the transmitter and receiver have only one active electrode, and either the ground or a large passive electrode serves as the return path for the current. The transmitter oscillator is connected between an active and a passive electrode. The load is also connected between an active and a passive electrode. The electric field produced by the transmitter induces alternating charge displacement in the load dipole through electrostatic induction.[60]

Resonant capacitive coupling

Resonance can also be used with capacitive coupling to extend the range. At the turn of the 20th century, Nikola Tesla did the first experiments with both resonant inductive and capacitive coupling.

Magnetodynamic coupling

In this method, power is transmitted between two rotating armatures, one in the transmitter and one in the receiver, which rotate synchronously, coupled together by a magnetic field generated by permanent magnets on the armatures.[15] The transmitter armature is turned either by or as the rotor of an electric motor, and its magnetic field exerts torque on the receiver armature, turning it. The magnetic field acts like a mechanical coupling between the armatures.[15] The receiver armature produces power to drive the load, either by turning a separate electric generator or by using the receiver armature itself as the rotor in a generator.

This device has been proposed as an alternative to inductive power transfer for noncontact charging of electric vehicles.[15] A rotating armature embedded in a garage floor or curb would turn a receiver armature in the underside of the vehicle to charge its batteries.[15] It is claimed that this technique can transfer power over distances of 10 to 15 cm (4 to 6 inches) with high efficiency, over 90%.[15][61] Also, the low frequency stray magnetic fields produced by the rotating magnets produce less electromagnetic interference to nearby electronic devices than the high frequency magnetic fields produced by inductive coupling systems. A prototype system charging electric vehicles has been in operation at the University of British Columbia since 2012. Other researchers, however, claim that the two energy conversions (electrical to mechanical to electrical again) make the system less efficient than electrical systems like inductive coupling.[15]

Far-field (radiative) techniques

Far field methods achieve longer ranges, often multiple kilometer ranges, where the distance is much greater than the diameter of the device(s). The main reason for longer ranges with radio wave and optical devices is the fact that electromagnetic radiation in the far-field can be made to match the shape of the receiving area (using high directivity antennas or well-collimated laser beams). The maximum directivity for antennas is physically limited by diffraction.

In general, visible light (from lasers) and microwaves (from purpose-designed antennas) are the forms of electromagnetic radiation best suited to energy transfer.

The dimensions of the components may be dictated by the distance from transmitter to receiver, the wavelength and the Rayleigh criterion or diffraction limit, used in standard radio frequency antenna design, which also applies to lasers. Airy’s diffraction limit is also frequently used to determine an approximate spot size at an arbitrary distance from the aperture. Electromagnetic radiation experiences less diffraction at shorter wavelengths (higher frequencies); so, for example, a blue laser is diffracted less than a red one.

The Rayleigh criterion dictates that any radio wave, microwave or laser beam will spread and become weaker and diffuse over distance; the larger the transmitter antenna or laser aperture compared to the wavelength of radiation, the tighter the beam and the less it will spread as a function of distance (and vice versa). Smaller antennae also suffer from excessive losses due to side lobes. However, the concept of a laser aperture differs considerably from that of an antenna: typically, a laser aperture much larger than the wavelength induces multi-moded radiation, and mostly collimators are used before the emitted radiation couples into a fiber or into space.

Ultimately, beamwidth is physically determined by diffraction due to the dish size in relation to the wavelength of the electromagnetic radiation used to make the beam.
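
Quantitatively, the diffraction limit invoked above gives (using the Airy criterion, with Dant the aperture diameter) a beam half-angle and a spot diameter at range Drange of roughly:

```latex
\theta \;\approx\; 1.22\,\frac{\lambda}{D_\mathrm{ant}},
\qquad
D_\mathrm{spot} \;\approx\; 2.44\,\frac{\lambda\,D_\mathrm{range}}{D_\mathrm{ant}}.
```

A tighter beam therefore demands a larger aperture or a shorter wavelength, which is the trade-off running through this whole section.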

Microwave power beaming can be more efficient than lasers, and is less prone to atmospheric attenuation caused by dust or water vapor.

Here, the power levels are calculated by combining the above parameters together, and adding in the gains and losses due to the antenna characteristics and the transparency and dispersion of the medium through which the radiation passes. That process is known as calculating a link budget.

Microwaves

An artist’s depiction of a solar satellite that could send electric energy by microwaves to a space vessel or planetary surface.

Power transmission via radio waves can be made more directional, allowing longer-distance power beaming, with shorter wavelengths of electromagnetic radiation, typically in the microwave range.[62] A rectenna may be used to convert the microwave energy back into electricity. Rectenna conversion efficiencies exceeding 95% have been realized. Power beaming using microwaves has been proposed for the transmission of energy from orbiting solar power satellites to Earth and the beaming of power to spacecraft leaving orbit has been considered.[63][64]

Power beaming by microwaves has the difficulty that, for most space applications, the required aperture sizes are very large due to diffraction limiting antenna directionality. For example, the 1978 NASA study of solar power satellites required a 1-kilometre-diameter (0.62 mi) transmitting antenna and a 10-kilometre-diameter (6.2 mi) receiving rectenna for a microwave beam at 2.45 GHz.[65] These sizes can be somewhat decreased by using shorter wavelengths, although short wavelengths may have difficulties with atmospheric absorption and beam blockage by rain or water droplets. Because of the “thinned-array curse“, it is not possible to make a narrower beam by combining the beams of several smaller satellites.
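
As a back-of-envelope check (not taken from the cited study), the 1978 figures are consistent with the spot-size formula above for a satellite in geostationary orbit at roughly 36,000 km:

```latex
\lambda = \frac{c}{f} = \frac{3\times10^{8}\ \mathrm{m/s}}{2.45\times10^{9}\ \mathrm{Hz}} \approx 0.12\ \mathrm{m},
\qquad
D_\mathrm{spot} \approx 2.44\,\frac{0.12\ \mathrm{m}\times 3.6\times10^{7}\ \mathrm{m}}{10^{3}\ \mathrm{m}} \approx 1.1\times10^{4}\ \mathrm{m} \approx 10\ \mathrm{km}.
```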

For earthbound applications, a large-area 10 km diameter receiving array allows large total power levels to be used while operating at the low power density suggested for human electromagnetic exposure safety. A human safe power density of 1 mW/cm2 distributed across a 10 km diameter area corresponds to 750 megawatts total power level. This is the power level found in many modern electric power plants.
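
The total power figure can be checked the same way (again a back-of-envelope computation, using 1 mW/cm² = 10 W/m²):

```latex
P \;=\; \rho\,\frac{\pi d^{2}}{4}
\;=\; 10\ \frac{\mathrm{W}}{\mathrm{m}^{2}}\times\frac{\pi\,(10^{4}\ \mathrm{m})^{2}}{4}
\;\approx\; 7.9\times10^{8}\ \mathrm{W},
```

consistent with the roughly 750 megawatts quoted above.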

Following World War II, which saw the development of high-power microwave emitters known as cavity magnetrons, the idea of using microwaves to transfer power was researched. By 1964, a miniature helicopter propelled by microwave power had been demonstrated.[66]

Japanese researcher Hidetsugu Yagi also investigated wireless energy transmission using a directional array antenna that he designed. In February 1926, Yagi and his colleague Shintaro Uda published their first paper on the tuned high-gain directional array now known as the Yagi antenna. While it did not prove to be particularly useful for power transmission, this beam antenna has been widely adopted throughout the broadcasting and wireless telecommunications industries due to its excellent performance characteristics.[67]

Wireless high power transmission using microwaves is well proven. Experiments in the tens of kilowatts have been performed at Goldstone in California in 1975[68][69][70] and more recently (1997) at Grand Bassin on Reunion Island.[71] These methods achieve distances on the order of a kilometer.

Under experimental conditions, microwave conversion efficiency was measured to be around 54%.[72]

A change to 24 GHz has been suggested as microwave emitters similar to LEDs have been made with very high quantum efficiencies using negative resistance, i.e., Gunn or IMPATT diodes, and this would be viable for short range links.

In 2013, inventor Hatem Zeine demonstrated how wireless power transmission using phased array antennas can deliver electrical power up to 30 feet. It uses the same radio frequencies as WiFi.[73][74]

In 2015, researchers at the University of Washington introduced power over Wi-Fi, which trickle-charges batteries and powered battery-free cameras and temperature sensors using transmissions from Wi-Fi routers.[75][76] Wi-Fi signals were shown to power battery-free temperature and camera sensors at ranges of up to 20 feet. It was also shown that Wi-Fi can be used to wirelessly trickle-charge nickel–metal hydride and lithium-ion coin-cell batteries at distances of up to 28 feet.

In 2017, the Federal Communications Commission (FCC) certified the first mid-field radio frequency (RF) transmitter of wireless power.[77]

Lasers

A laser beam centered on a panel of photovoltaic cells provides enough power to a lightweight model airplane for it to fly.

In the case of electromagnetic radiation closer to the visible region of the spectrum (tens of micrometers to tens of nanometers), power can be transmitted by converting electricity into a laser beam that is then pointed at a photovoltaic cell.[78][79] This mechanism is generally known as ‘power beaming’ because the power is beamed at a receiver that can convert it to electrical energy. At the receiver, special photovoltaic laser power converters which are optimized for monochromatic light conversion are applied.[80]

Advantages compared to other wireless methods are:[81]

  • Collimated monochromatic wavefront propagation allows narrow beam cross-section area for transmission over large distances.
  • Compact size: solid state lasers fit into small products.
  • No radio-frequency interference to existing radio communication such as Wi-Fi and cell phones.
  • Access control: only receivers hit by the laser receive power.

Drawbacks include:

  • Laser radiation is hazardous. Low power levels can blind humans and other animals. High power levels can kill through localized spot heating.
  • Conversion between electricity and light is limited. Photovoltaic cells achieve 40%–50% efficiency.[82] (The conversion efficiency of laser light into electricity is much higher than that of sunlight into electricity.)
  • Atmospheric absorption, and absorption and scattering by clouds, fog, rain, etc., causes up to 100% losses.
  • Requires a direct line of sight with the target. (Instead of being beamed directly onto the receiver, the laser light can also be guided by an optical fiber. Then one speaks of power-over-fiber technology.)

Laser ‘powerbeaming’ technology was explored in military weapons[83][84][85] and aerospace[86][87] applications. It is also applied to powering various kinds of sensors in industrial environments. More recently, it has been developed for powering commercial and consumer electronics. Wireless energy transfer systems using lasers for the consumer space have to satisfy laser safety requirements standardized under IEC 60825.[citation needed]

Other details include propagation,[88] and the coherence and the range limitation problem.[89]

Geoffrey Landis[90][91][92] is one of the pioneers of solar power satellites[93] and laser-based transfer of energy especially for space and lunar missions. The demand for safe and frequent space missions has resulted in proposals for a laser-powered space elevator.[94][95]

NASA’s Dryden Flight Research Center demonstrated a lightweight unmanned model plane powered by a laser beam.[96] This proof-of-concept demonstrates the feasibility of periodic recharging using the laser beam system.

Atmospheric plasma channel coupling

In atmospheric plasma channel coupling, energy is transferred between two electrodes by electrical conduction through ionized air.[97] When an electric field gradient exceeding 34 kilovolts per centimeter at sea-level atmospheric pressure exists between the two electrodes, an electric arc occurs.[98] This atmospheric dielectric breakdown results in the flow of electric current along a random trajectory through an ionized plasma channel between the two electrodes. An example of this is natural lightning, where one electrode is a virtual point in a cloud and the other is a point on Earth. Laser Induced Plasma Channel (LIPC) research is presently underway using ultrafast lasers to artificially promote development of the plasma channel through the air, directing the electric arc, and guiding the current across a specific path in a controllable manner.[99] The laser energy reduces the atmospheric dielectric breakdown voltage and the air is made less insulating by superheating, which lowers the density (ρ) of the filament of air.[100]

This new process is being explored for use as a laser lightning rod and as a means to trigger lightning bolts from clouds for natural lightning channel studies,[101] for artificial atmospheric propagation studies, as a substitute for conventional radio antennas,[102] for applications associated with electric welding and machining,[103][104] for diverting power from high-voltage capacitor discharges, for directed-energy weapon applications employing electrical conduction through a ground return path,[105][106][107][108] and electronic jamming.[109]

Energy harvesting

In the context of wireless power, energy harvesting, also called power harvesting or energy scavenging, is the conversion of ambient energy from the environment to electric power, mainly to power small autonomous wireless electronic devices.[110] The ambient energy may come from stray electric or magnetic fields or radio waves from nearby electrical equipment, light, thermal energy (heat), or kinetic energy such as vibration or motion of the device.[110] Although the efficiency of conversion is usually low and the power gathered often minuscule (milliwatts or microwatts),[110] it can be adequate to run or recharge small micropower wireless devices such as remote sensors, which are proliferating in many fields.[111][110] This new technology is being developed to eliminate the need for battery replacement or charging of such wireless devices, allowing them to operate completely autonomously.

History

19th century developments and dead ends

The 19th century saw many developments of theories and counter-theories on how electrical energy might be transmitted. In 1826 André-Marie Ampère found Ampère’s circuital law showing that electric current produces a magnetic field.[112] Michael Faraday described in 1831 with his law of induction the electromotive force driving a current in a conductor loop by a time-varying magnetic flux. The fact that electrical energy could be transmitted at a distance without wires was actually observed by many inventors and experimenters,[113][114][115] but the lack of a coherent theory attributed these phenomena vaguely to electromagnetic induction.[116] A concise explanation of these phenomena would come from the 1860s Maxwell’s equations[16][39] by James Clerk Maxwell, establishing a theory that unified electricity and magnetism into electromagnetism, predicting the existence of electromagnetic waves as the “wireless” carrier of electromagnetic energy. Around 1884 John Henry Poynting defined the Poynting vector and gave Poynting’s theorem, which describe the flow of power across an area within electromagnetic radiation and allow for a correct analysis of wireless power transfer systems.[16][39][117] This was followed by Heinrich Rudolf Hertz’s 1888 validation of the theory, which included the evidence for radio waves.[117]

During the same period two schemes of wireless signaling were put forward by William Henry Ward (1871) and Mahlon Loomis (1872) that were based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude.[118][119] Both inventors’ patents noted that this layer, connected with a return path using “Earth currents”, would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries, and could also be used for lighting, heat, and motive power.[120][121] A more practical demonstration of wireless transmission via conduction came in Amos Dolbear’s 1879 magneto-electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile.[122]

Tesla

Tesla demonstrating wireless transmission by “electrostatic induction” during an 1891 lecture at Columbia College. The two metal sheets are connected to a Tesla coil oscillator, which applies high-voltage radio frequency alternating current. An oscillating electric field between the sheets ionizes the low-pressure gas in the two long Geissler tubes in his hands, causing them to glow in a manner similar to neon tubes.

After 1890 inventor Nikola Tesla experimented with transmitting power by inductive and capacitive coupling using spark-excited radio frequency resonant transformers, now called Tesla coils, which generated high AC voltages.[39][41][123] Early on he attempted to develop a wireless lighting system based on near-field inductive and capacitive coupling[41] and conducted a series of public demonstrations where he lit Geissler tubes and even incandescent light bulbs from across a stage.[41][123][124] He found he could increase the distance at which he could light a lamp by using a receiving LC circuit tuned to resonance with the transmitter’s LC circuit, that is, by resonant inductive coupling.[40][41][42] Tesla failed to make a commercial product out of his findings,[125] but his resonant inductive coupling method is now widely used in electronics and is currently being applied to short-range wireless power systems.[41][126]
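The resonance involved is the ordinary LC kind: a coil and a capacitance ring at f0 = 1 / (2π√(LC)), and transfer improves when transmitter and receiver are tuned to the same frequency. A minimal sketch with made-up component values:

# Resonant frequency of an LC circuit: f0 = 1 / (2*pi*sqrt(L*C)).
# Component values below are hypothetical.

import math

def resonant_frequency_hz(l_henry: float, c_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L = 100e-6   # 100 uH coil
C = 10e-9    # 10 nF capacitor
print(f"f0 = {resonant_frequency_hz(L, C) / 1e3:.1f} kHz")  # ~159.2 kHz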

(left) Experiment in resonant inductive transfer by Tesla at Colorado Springs 1899. The coil is in resonance with Tesla’s magnifying transmitter nearby, powering the light bulb at bottom. (right) Tesla’s unsuccessful Wardenclyffe power station.

Tesla went on to develop a wireless power distribution system that he hoped would be capable of transmitting power long distances directly into homes and factories. Early on he seemed to borrow from the ideas of Mahlon Loomis,[127][128] proposing a system composed of balloons to suspend transmitting and receiving electrodes in the air above 30,000 feet (9,100 m) in altitude, where he thought the lower pressure would allow him to send high voltages (millions of volts) long distances. To further study the conductive nature of low-pressure air he set up a test facility at high altitude in Colorado Springs during 1899.[129][130][131] Experiments he conducted there with a large coil operating in the megavolts range, as well as observations he made of the electronic noise of lightning strikes, led him to conclude incorrectly[132][122] that he could use the entire globe of the Earth to conduct electrical energy. The theory involved driving alternating current pulses into the Earth at its resonant frequency from a grounded Tesla coil working against an elevated capacitance, to make the potential of the Earth oscillate. Tesla thought this would allow alternating current to be received with a similar capacitive antenna tuned to resonance with it at any point on Earth with very little power loss.[133][134][135] His observations also led him to believe a high voltage used in a coil at an elevation of a few hundred feet would “break the air stratum down”, eliminating the need for miles of cable hanging on balloons to create his atmospheric return circuit.[136][137] The next year Tesla proposed a “World Wireless System” that was to broadcast both information and power worldwide.[138][139] In 1901, at Shoreham, New York, he attempted to construct a large high-voltage wireless power station, now called Wardenclyffe Tower, but by 1904 investment had dried up and the facility was never completed.

Near-field and non-radiative technologies

Inductive power transfer between nearby wire coils was the earliest wireless power technology to be developed, existing since the transformer was developed in the 1800s. Induction heating has been used since the early 1900s.[140] With the advent of cordless devices, induction charging stands have been developed for appliances used in wet environments, like electric toothbrushes and electric razors, to eliminate the hazard of electric shock. One of the earliest proposed applications of inductive transfer was to power electric locomotives. In 1892 Maurice Hutin and Maurice Leblanc patented a wireless method of powering railroad trains using resonant coils inductively coupled to a track wire at 3 kHz.[141] The first passive RFID (Radio Frequency Identification) technologies were invented by Mario Cardullo[142] (1973) and Koelle et al.[143] (1975) and by the 1990s were being used in proximity cards and contactless smartcards.

The proliferation of portable wireless communication devices such as mobile phones, tablets, and laptop computers in recent decades is currently driving the development of mid-range wireless powering and charging technology to eliminate the need for these devices to be tethered to wall plugs during charging.[144] The Wireless Power Consortium was established in 2008 to develop interoperable standards across manufacturers.[144] Its Qi inductive power standard, published in August 2009, enables high-efficiency charging and powering of portable devices at up to 5 watts over distances of up to 4 cm (1.6 inches).[145] The wireless device is placed on a flat charger plate (which can be embedded in table tops at cafes, for example) and power is transferred from a flat coil in the charger to a similar one in the device.

In 2007, a team led by Marin Soljačić at MIT used a dual resonance transmitter with a 25 cm diameter secondary tuned to 10 MHz to transfer 60 W of power to a similar dual resonance receiver over a distance of 2 meters (6.6 ft) (eight times the transmitter coil diameter) at around 40% efficiency.[41][44] In 2008 the team of Greg Leyh and Mike Kennan of Nevada Lightning Lab used a grounded dual resonance transmitter with a 57 cm diameter secondary tuned to 60 kHz and a similar grounded dual resonance receiver to transfer power through coupled electric fields with an earth return circuit over a distance of 12 meters (39 ft).[146]
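A common way to reason about such resonant links is the figure of merit k^2 * Q1 * Q2 (coupling coefficient k, coil quality factors Q1 and Q2), which bounds the achievable link efficiency via a standard textbook expression. The sketch below evaluates that expression with hypothetical values; it is not a model of the MIT or Nevada experiments.

# Peak efficiency of a resonant inductive link from the figure of
# merit U^2 = k^2 * Q1 * Q2. Values are hypothetical; they only show
# how weak coupling plus high-Q resonators can still transfer usefully.

import math

def max_link_efficiency(k: float, q1: float, q2: float) -> float:
    u2 = k * k * q1 * q2
    return u2 / (1.0 + math.sqrt(1.0 + u2)) ** 2

print(f"{max_link_efficiency(0.01, 300, 300):.0%}")  # ~52% at k = 1%

Even at 1% coupling, two Q = 300 resonators give roughly half the power through the link, which is the essential insight behind mid-range resonant transfer.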

Microwaves and lasers

Before World War 2, little progress was made in wireless power transmission.[147] Radio was developed for communication uses, but could not be used for power transmission because the relatively low-frequency radio waves spread out in all directions, so that little energy reached the receiver.[16][39][147] In radio communication, at the receiver, an amplifier intensifies a weak signal using energy from another source. Efficient power transmission required transmitters that could generate higher-frequency microwaves, which can be focused in narrow beams towards a receiver.[16][39][147][148]

The development of microwave technology during World War 2, such as the klystron and magnetron tubes and parabolic antennas,[147] made radiative (far-field) methods practical for the first time, and the first long-distance wireless power transmission was achieved in the 1960s by William C. Brown.[16][39] In 1964 Brown invented the rectenna, which could efficiently convert microwaves to DC power, and demonstrated it the same year with the first wireless-powered aircraft, a model helicopter driven by microwaves beamed from the ground.[16][147] A major motivation for microwave research in the 1970s and 80s was to develop a solar power satellite.[39][147] Conceived in 1968 by Peter Glaser, this would harvest energy from sunlight using solar cells and beam it down to Earth as microwaves to huge rectennas, which would convert it to electrical energy on the electric power grid.[16][149] In landmark 1975 experiments as technical director of a JPL/Raytheon program, Brown demonstrated long-range transmission by beaming 475 W of microwave power to a rectenna a mile away, with a microwave-to-DC conversion efficiency of 54%.[150] At NASA’s Jet Propulsion Laboratory he and Robert Dickinson transmitted 30 kW of DC output power across 1.5 km with 2.38 GHz microwaves from a 26 m dish to a 7.3 x 3.5 m rectenna array. The incident-RF-to-DC conversion efficiency of the rectenna was 80%.[16][151] In 1983 Japan launched MINIX (Microwave Ionosphere Nonlinear Interaction Experiment), a rocket experiment to test transmission of high-power microwaves through the ionosphere.[16]
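For far-field beaming, the standard back-of-the-envelope tool is the Friis transmission equation. The sketch below uses illustrative numbers, not the parameters of Brown's experiments.

# Far-field link budget via the Friis equation:
#   P_r = P_t * G_t * G_r * (lambda / (4*pi*d))^2
# Valid only in the antennas' far field; numbers are illustrative.

import math

def friis_received_w(p_t_w: float, g_t: float, g_r: float,
                     freq_hz: float, dist_m: float) -> float:
    lam = 3.0e8 / freq_hz                    # wavelength in meters
    return p_t_w * g_t * g_r * (lam / (4.0 * math.pi * dist_m)) ** 2

# 1 kW at 2.45 GHz over 1 km with 30 dBi (gain 1000) on each end:
print(f"{friis_received_w(1e3, 1e3, 1e3, 2.45e9, 1e3):.2f} W")  # ~0.09 W

Even with quite directive antennas, only about a tenth of a watt of the kilowatt arrives; this is why practical beaming needs very large apertures or short distances, and why the rectenna experiments used big dishes and arrays.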

In recent years a focus of research has been the development of wireless-powered drone aircraft, which began in 1959 with the Dept. of Defense’s RAMP (Raytheon Airborne Microwave Platform) project,[147] which sponsored Brown’s research. In 1987 Canada’s Communications Research Centre developed a small prototype airplane called the Stationary High Altitude Relay Platform (SHARP) to relay telecommunication data between points on Earth, much like a communications satellite. Powered by a rectenna, it could fly at 13 miles (21 km) altitude and stay aloft for months. In 1992 a team at Kyoto University built a more advanced craft called MILAX (MIcrowave Lifted Airplane eXperiment).

In 2003 NASA flew the first laser powered aircraft. The small model plane’s motor was powered by electricity generated by photocells from a beam of infrared light from a ground-based laser, while a control system kept the laser pointed at the plane.

References

  1. Dickinson, Richard M. (1976). “Performance of a high-power 2.388 GHz receiving array in wireless power transmission over 1.54 km” (PDF). MTT-S Int’l Microwave Symposium Digest. 76: 139–141. doi:10.1109/mwsym.1976.1123672. Retrieved 9 November 2014.

Source: Wireless power transfer, https://en.wikipedia.org/w/index.php?title=Wireless_power_transfer&oldid=819916621 (last visited Jan. 13, 2018).

Top three most successful Forex traders ever


Whether you are new to trading Forex or an old hand at the currency markets, you are likely to share one key aspiration:

How do I become more successful at trading in 2018?

One way to improve is to learn by example and to look at some of the most successful Forex traders in the world. In this article, you’ll learn about what the top Forex traders in the world have in common and how those strengths helped them to make huge profits.

While you may have heard statistics thrown around suggesting that the ratio of successful Forex traders to unsuccessful ones is small, there are at least a couple of reasons to be sceptical about such claims.

Firstly, hard data is hard to come by on the subject because of the decentralised, over-the-counter nature of the Forex market. But there is plenty of educational material, and there are working Forex trading strategies available, to improve your trading performance.

Second, we would expect the distribution of winners and losers to follow something of a bell-curve, meaning that there would be:

  1. very few large losers
  2. a great number of small losers
  3. a great number of small winners; and
  4. very few large winners.

The data that is available from Forex and CFD firms (albeit just a very small slice of the vast global FX market) suggests that very successful traders are the rarest group. Most people stop once they start losing beyond a certain threshold, whereas the big winners keep on trading.

The number of small losers slightly outweighs the number of small winners, mainly because of the effect of market spread. So the percentage of successful Forex traders is not substantially smaller than the percentage of unsuccessful ones. There is little doubt, though, that the most successful traders are an elite few.
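A toy simulation makes the spread effect visible: give every trader zero edge, subtract a small spread cost per trade, and fewer than half finish ahead. The numbers are invented purely for illustration.

# Toy simulation of the winner/loser split: zero-edge trades minus a
# small per-trade spread cost. Entirely illustrative.

import random

random.seed(1)
traders = 10_000
winners = 0
for _ in range(traders):
    pnl = sum(random.gauss(0.0, 1.0) - 0.02 for _ in range(100))
    if pnl > 0:
        winners += 1
print(f"{winners / traders:.0%} finish profitable")  # ~42%, not 50%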

However, by looking at a select group of famous Forex traders we can see that they have a few things in common.

  1. Discipline—the ability to recognise when a trade is wrong and therefore minimise losses.
  2. Risk control—having a strong understanding of a trade’s risk/reward. You can read more about this in our risk management guide.
  3. Courage—the willingness to be different from the rest of the crowd, most of the time.
  4. Astuteness—judging how perceptions are shaping market trends.

The upshot of these characteristics has been consistent and large profits.

The world’s best Forex trader

Let’s begin our review of successful Forex traders by looking at one of the industry’s legendary beacons of good fortune, George Soros.

Mr Soros is known as one of the greatest investors in history. He sealed his reputation as a legendary money manager by reportedly profiting more than £1 billion from his short position in pound sterling. He did so ahead of Black Wednesday, 16 September 1992.

At the time, Britain was a part of the Exchange Rate Mechanism (ERM). This mechanism required the government to intervene if the pound weakened beyond a certain level against the Deutsche Mark.

Soros successfully predicted that a combination of circumstances—including the then high level of British interest rates and the unfavourable rate at which Britain had joined the ERM—had left the Bank of England vulnerable.

Britain’s commitment to maintaining the pound’s value against the Deutsche Mark meant intervening when the pound weakened, either by buying sterling, by raising interest rates, or both. The recession meant that higher interest rates were very painful for the rest of the economy, hindering investment when encouragement was needed instead.

Economists at the Bank of England recognised that the appropriate level of interest rates was far lower than the level required to prop up the pound as part of the ERM. But the value of sterling was maintained because of the UK’s public commitment to buying sterling.

In the weeks leading up to Black Wednesday, Soros used his Quantum Fund to build a large short position in sterling. Then, on the eve of Black Wednesday, comments from the President of the German Bundesbank suggested that certain currencies could come under pressure.

And this led Soros to increase his position considerably.

When the Bank of England began buying billions of pounds on the Wednesday morning, it found the price of the pound was little moved. This was due to the flood of selling in the market from other speculators following Soros’ lead.

A last-ditch attempt to defend the pound, with UK rates briefly hiked to 15%, proved futile. When the UK announced its exit from the ERM and a resumption of a free-floating pound, the currency plunged 15% against the Deutsche Mark and 25% against the US dollar.

As a result, the Quantum Fund made billions of dollars and Soros became known as the man who broke the Bank of England.

Want to know the best part?

Although Soros’ short position in the pound was huge, his downside was always relatively restricted. Leading up to his trade, the market had shown no appetite for sterling strength, as demonstrated by the repeated need for the British government to intervene to prop up the pound.

Even if his trade had gone wrong and Britain had managed to stay in the ERM, inertia would more likely have prevailed than a large appreciation of the pound.

Here we see Soros’ strong appreciation of risk/reward – one of the facets that helped carve his reputation as the best Forex trader in the world. Rather than subscribing to the traditional economic theory that prices will eventually move to a theoretical equilibrium, Soros deems the theory of reflexivity to be more helpful in judging the financial markets.

This theory suggests there is a feedback mechanism between perception and events. In other words, the perceptions of market participants help to shape market prices which in turn reinforce perceptions.

This was played out in his famous sterling short, where the devaluation of the pound only occurred when enough speculators believed the Bank of England could no longer defend its currency.

He once told the Wall Street Journal “I’m only rich because I know when I’m wrong”. The quote demonstrates both his willingness to cut a trade that is not working and the discipline shared by the most successful Forex traders.

Who else counts?

So George Soros is number 1 on our list as probably the best known of the world’s most successful Forex traders and certainly one of the globe’s highest earners from a short term trade.

But who else is up there?

Stanley Druckenmiller

George Soros casts a long shadow and it shouldn’t come as too much of a surprise that the most successful Forex trader has ties to another of the names on our list.

Stanley Druckenmiller considers George Soros his mentor. In fact, Mr. Druckenmiller worked alongside him at the Quantum Fund for more than a decade. But Druckenmiller then established a formidable reputation in his own right, successfully managing billions of dollars for his own fund, Duquesne Capital.

As well as being part of Soros’ famous Black Wednesday trade, Mr Druckenmiller boasted an incredible record of successive years of double-digit gains with Duquesne before retiring. Druckenmiller’s net worth is valued at more than $2 billion.

Druckenmiller says that his trading philosophy for building long-term returns revolves around preserving capital and then aggressively pursuing profits when trades are going well. This approach downplays the importance of being right or wrong.

Instead, it emphasises the value of maximising the opportunity when you are right and minimising the damage when you are wrong. As Druckenmiller said when interviewed for the celebrated book The New Market Wizards, “there are a lot of shoes on the shelf; wear only the ones that fit.”

Bill Lipschutz

Oddly enough, Bill Lipschutz made profits numbering in the hundreds of millions of dollars at the FX department of Salomon Brothers in the 1980s – despite having no previous experience of the currency markets.

Often called the Sultan of Currencies, Mr Lipschutz describes FX as a very psychological market. And like our other successful Forex traders, the Sultan believes market perceptions help determine price action as much as pure fundamentals.

Lipschutz also agrees with Stanley Druckenmiller’s view that being a successful Forex trader does not depend on being right more often than you are wrong. Instead, he stresses that you need to work out how to make money when you are right only 20 to 30 per cent of the time.
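The arithmetic behind that claim is simple expectancy: what a strategy earns per trade on average once win rate and average win/loss sizes are combined. A minimal sketch with invented numbers:

# Expectancy per trade: win_rate * avg_win - (1 - win_rate) * avg_loss.
# Shows how a 25% win rate can be profitable if winners are big enough.

def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    return win_rate * avg_win - (1.0 - win_rate) * avg_loss

print(expectancy(0.25, 4.0, 1.0))  # +0.25 units/trade: 4:1 payoff works
print(expectancy(0.25, 2.0, 1.0))  # -0.25 units/trade: 2:1 does not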

Here are some of Lipschutz’s other key tenets.

  1. Any trading idea needs to be well reasoned before you place the trade.
  2. Build a position as the market goes your way and exit the same way.
    Then start to ease up once there are signs that the fundamentals and the price action are beginning to change.
  3. There is a need to be aware of the market’s focus.
    FX is a 24-hour market and doesn’t stop moving when you go to bed.

Lipschutz also stresses the need to manage risk, saying that your trading size should be chosen to avoid being forced out of your position if your timing is inexact.
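One simple way to implement that advice is to size each position off a fixed fraction of the account and the distance to the stop. The sketch below is a generic rule of thumb with hypothetical numbers, not Lipschutz’s own method.

# Size a position so a stop-out costs a fixed fraction of the account.
# Hypothetical numbers; not a recommendation.

def position_size(account: float, risk_fraction: float,
                  entry: float, stop: float) -> float:
    """Units such that abs(entry - stop) * units == account * risk_fraction."""
    return (account * risk_fraction) / abs(entry - stop)

# 10,000 account, 1% risk, long EURUSD at 1.2000 with a stop at 1.1950:
print(round(position_size(10_000, 0.01, 1.2000, 1.1950)))  # 20000 units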

How successful is a successful Forex trader?

We’ve looked at the biggest names among successful Forex traders, but there is an army of profitable traders out there. Joining the list of people who are able to consistently turn a profit each month trading FX is an achievable goal.

So, what’s the bottom line?

Well, even the most successful traders had to begin somewhere, and if you can regularly generate profits, you can consider yourself a successful Forex trader.

Source: https://admiralmarkets.com/education/articles/trading-psychology/top-three-most-successful-forex-traders-ever?#c!=1 © Admiral Markets loaded 9.1.2018

Short trade in the Dow Jones Industrial future

As I explained in my last post, the cash index pays dividends if held overnight and the future does not cost anything overnight.

As usual, I started the trade slowly and built it up over time:

Time Symbol Type Direction Volume Price Profit
2017.12.14 16:21:18 UsaIndDec17 sell in 0.10 24630 0.00
2017.12.14 16:23:04 UsaIndDec17 sell in 0.10 24631 0.00
2017.12.14 16:24:36 UsaIndDec17 sell in 0.10 24630 0.00
2017.12.14 16:30:55 UsaIndDec17 sell in 1.00 24625 0.00
2017.12.14 16:31:07 UsaIndDec17 sell in 1.00 24621 0.00
2017.12.14 16:37:19 UsaIndDec17 sell in 1.00 24638 0.00
2017.12.14 16:39:11 UsaIndDec17 sell in 0.70 24632 0.00
2017.12.14 16:44:25 UsaIndDec17 sell in 0.70 24635 0.00
2017.12.14 16:46:33 UsaIndDec17 sell in 0.30 24633 0.00
2017.12.14 17:25:28 UsaIndDec17 buy out 5.00 24597 700.12
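As a cross-check on the log, the sketch below reconstructs the profit from the fills above. The point value ($5 per point per lot, the e-mini Dow convention) and the EUR/USD rate (~1.178, mid-December 2017) are my assumptions, not stated in the log.

# Reconstruct the trade's P/L from the fills above.
# Assumes $5 per point per lot and EUR/USD ~1.178 (both assumptions).

fills = [  # (lots, price) of the nine sell orders
    (0.10, 24630), (0.10, 24631), (0.10, 24630),
    (1.00, 24625), (1.00, 24621), (1.00, 24638),
    (0.70, 24632), (0.70, 24635), (0.30, 24633),
]
exit_price = 24597                       # buy-out of all 5.00 lots

lots = sum(v for v, _ in fills)
avg_entry = sum(v * p for v, p in fills) / lots
points = avg_entry - exit_price          # short trade: entry - exit
profit_eur = points * 5.0 * lots / 1.178
print(f"{lots:.2f} lots, avg entry {avg_entry:.2f}, +{points:.2f} pts")
print(f"~{profit_eur:.0f} EUR")          # ~700 EUR, matching the log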

I went short pro-cyclically, that is, after the index had already fallen:

Reduce the risk in the short trade

The long trade in the cash index hit its stop; sell the future

24645 last high

Wider stop

Still under the red line

Sell at the red line

Risk is rising even more

Sell again

Moving in the right direction

Up again

Sell again

Enormous risk

Sell again

1 point equals 20 Euros, so I sold 20 Lots:

Waiting for the new low:

It is coming:

Reduce the risk to 10 points:

Still no new low

Next aim 24620

It is coming

There it comes

Still on the horizontal support:

Here we go:

Back in the red

Down again

Correct the red line

Stop to entry level

Move the stop into profit so that the day's loss is compensated:

83 points down from the ATH

Take the profit stop further down:

Set the profit stop to a new record profit of 550 Euros:

Keep it or sell it?

Finally stopped out

Benefits of Qi Wireless Charging

One of the things we hear the most after consumers use Qi wireless charging for the first time is, “it’s so simple” or “how did I go without wireless charging before?” Most people don’t realize the convenience of wireless charging until they use it throughout their daily life.

Have you ever experienced this before?

[youtube https://www.youtube.com/watch?v=Yh1gJ6YyfNM?rel=0&autoplay=0&controls=1&fs=1&showinfo=1]

When you have Qi wireless chargers by your bed, in your car, at work or on the go, you can be confident and never have to worry about a dead battery. Most users of wireless charging find that they do "power grazing": instead of just putting their phone down on a desk, table, or car console when not in use, they put it on their Qi wireless charger. If they need to use their phone, they just pick it up. There are no wires to fumble with, and their phone keeps a healthy charge all day without their even thinking about charging.

Cue happy dance!

[youtube https://www.youtube.com/watch?v=Jvd_ZXkcfkY?rel=0&autoplay=0&controls=1&fs=1&showinfo=1]

You have probably heard about wireless charging being embedded in phones like the new iPhones or Samsung devices. But what you might not know is that Qi wireless charging is already installed in thousands of public locations worldwide, with more being added every day. You might already find wireless charging spots in hotels, airports, travel lounges, restaurants, coffee shops, businesses, stadiums and other public places. You can even find wireless charging installed in over 80 car models from Mercedes-Benz to Toyota or Ford.

Now that Apple is including Qi wireless charging in its new phones, the number of locations offering wireless charging will grow even faster, so that soon you won't have to hassle with carrying cords, hunting for an outlet, or worrying about running out of battery power.

Many wireless charging products and accessories are readily available today, but with growing popularity also comes an influx of fake, knock-off wireless charging products.

Products using the Qi standard must be tested rigorously to help ensure safety, interoperability and energy efficiency. Only products that have passed these independent laboratory tests can use the Qi logo and are considered “Qi Certified.” Be cautious of claims of “Qi compliant,” “Qi compatible” or “Works with Qi,” as these may indicate a product has not undergone Qi certification testing.

Learn more about how to tell if a product is Qi Certified here, or search our database for any product's certification here.

Still have questions? Watch this quick video on what to know before buying a wireless charger.

[youtube https://www.youtube.com/watch?v=0NK4DdZu5-Y?rel=0&autoplay=0&controls=1&fs=1&showinfo=1]

Source: https://www.wirelesspowerconsortium.com/blog/278/benefits-of-qi-wireless-charging loaded 13.1.2018
