Linear Programming

Overview

Linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints.

Its feasible region is a convex polytope.

Linear programs are problems that can be expressed in canonical form as:

linear_programming_813fb687150e0b33afd18baf128fefd5b1b0e3cc.png

Notation

  • linear_programming_ed39d9a397196f8f0ce6388b0ea4e0c1dd8becee.png denotes the vector of variables
  • linear_programming_00ada20d7e149a775bac48dd84363b1c03ecb345.png and linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png are vectors of known coefficients
  • linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png is # of constraints
  • linear_programming_f07b932b12c91bca424fd428d383495413b5b6a9.png is # of variables
  • linear_programming_11f664e66e5e47c91aefa82bdc361723ec3325db.png and linear_programming_549f574b33338def2814b2f78bcfe849b40c8f83.png where linear_programming_5315ab6e1e0ac72f85ea1eda2b7a8d46d44a82d2.png
  • linear_programming_d80b633218f04f7a25f6a9dd7e4617a38e88f263.png denotes the set of feasible / possible solutions
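As a concrete numeric illustration (a toy example of mine, not from the notes, assuming SciPy is available), here is a tiny LP in this canonical form solved with `scipy.optimize.linprog`:

```python
# Toy canonical-form LP (example data is mine):
#   maximize  x1 + x2   s.t.   x1 + 2 x2 <= 4,   3 x1 + x2 <= 6,   x >= 0
from scipy.optimize import linprog

c = [-1.0, -1.0]                    # linprog minimizes, so negate to maximize
A_ub = [[1.0, 2.0], [3.0, 1.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# The optimum is attained at a vertex of the feasible polytope: (1.6, 1.2).
```

Note that the optimum lands on a vertex of the feasible region, which is exactly the convexity property described above.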

From the basics

Consider the following LP problem:

linear_programming_310ffc41dc83fb4cdf5d111ac3d9d630ad9a1e90.png

Then observe the following:

  1. We can rewrite the objective:

    linear_programming_aa8a88c9a6b6cd7c5e84aaac6321d5afc8fea324.png

    I.e., minimizing linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png corresponds to finding linear_programming_91eeb509809f5291f6d43d562c380df4d733cb9b.png and linear_programming_8802b92c7bbe01509ff2b6f6dd3c58c15fe07b19.png such that the smallest value linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png can take on while still satisfying the equality is as small as possible!

  2. We can rewrite an inequality of the form

    linear_programming_cf287158ce340aa14c019bbf917561a11e21c89d.png

    as an equality by introducing a slack variable s ≥ 0: the inequality holds exactly when there is some non-negative slack s making the two sides equal. (This is the same slack-variable trick used in the standard form later on.)

If we do this for all inequalities, we observe that we're left with a simple linear system:

linear_programming_f5dfd6202f0cad634e21e2621bf9893c9270bf6f.png

This we can simply solve using standard linear algebra! There are a few problems here, though:

  1. It does not encode the minimization of the objective
  2. It does not encode the fact that the variables are non-negative
  3. If linear_programming_22ed71eab032fb136759934c185b5e83ea6d1cf4.png, i.e. we have more variables than constraints, then we have multiple possible solutions

Therefore we have to keep track of these points ourselves.

The Simplex Method is a method which helps us with exactly that by doing the following:

  • It gives us conditions which have to hold if the solution we're currently considering is the best solution we can in fact obtain
  • It tells us how to obtain other solutions from the solution we're currently considering (where we can just check which one is best by comparing the values linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png takes on for the different solutions)

To address the first point, "how do we know that the solution under consideration is best?", we make the following observation:

  • linear_programming_d3c11f81ce63bbeacdabcf9658cbe83f075752f2.png then linear_programming_e089f2222738beb0db09f4711cc9e8565f40748e.png will have the best possible value of linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png if linear_programming_c12d3150a2403223d1a0170eb8d3ec1010f439da.png, right?
    • Since the coefficients of linear_programming_91eeb509809f5291f6d43d562c380df4d733cb9b.png and linear_programming_8802b92c7bbe01509ff2b6f6dd3c58c15fe07b19.png are both negative, any increase in their values will lead to a decrease in the objective function linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png!

The Simplex method makes use of this fact by observing that if we can transform the system of linear equations such that the coefficients of linear_programming_42b3c8f6ed56c7cfb60e6194029bee525c549d11.png are all non-negative, then no variable can be increased to improve the objective any further, i.e. the current solution is optimal.

Introduction

We are given as input the following linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png constraints (inequalities):

linear_programming_006d8f21e8a2873288216f71e34b25932a59f875.png

where linear_programming_de3a830ce9dc054c2bab83ddf427167a90301700.png for linear_programming_11f664e66e5e47c91aefa82bdc361723ec3325db.png and linear_programming_549f574b33338def2814b2f78bcfe849b40c8f83.png. And then we have some (linear) objective function we want to optimize wrt. linear_programming_4df9a82acc9ce78b2df933379723ea2c69403142.png, i.e. linear_programming_ed39d9a397196f8f0ce6388b0ea4e0c1dd8becee.png such that the above constraints are satisfied.

We can always change the "direction" of an inequality by simply multiplying both sides by -1.

We also allow equalities, since

linear_programming_4f4244e714507e1135df4da53aa9133bbc784504.png

but NOT strict inequalities!

Fourier-Motzkin Elimination

Let's do an example: assume that we're working with a 3-variable 5-constraint LP, namely,

linear_programming_952b76d069b8688de13f22e68b11cb8e4fc96aea.png

Fourier-Motzkin elimination is then to recursively eliminate one variable at a time. Let's do linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png first:

  1. "Normalize" the constraints wrt. linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png, i.e. multiply the constraints by positive constants such that the coeff. of linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png is in linear_programming_60ac559d72b4e37fddee0b0669fe7d3b8e5f4f87.png

    linear_programming_fac2f04efac54f71d59140409961b9f83f03cdc0.png

  2. Write inequalities as:

    linear_programming_bd7e5107e9377a7eead78b574a493972ec8b8e7b.png

    for some linear_programming_2eea71c153695ae6eac43bedb531b3eba7eedebc.png depending on whether linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png has coefficient linear_programming_7ba43ff00864c097bf5a587573e23164e2417a0b.png or linear_programming_66c1fde085072fe4dbd43a4f2fc7da3bdcb70e6b.png (if coeff is 0, we just leave it). Then we get,

    linear_programming_fbeb55a8c195fde260017a65d93fddcff547d655.png

    where, for example, we have done:

    linear_programming_525d84849c7023551edc3d07e3cf4175875580d9.png

  3. For each pair of inequalities that bound linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png from opposite sides (one upper and one lower bound), we "combine" them to form:

    linear_programming_5201abd0a34228020862cb22f97546c84bc4793a.png

    Thus we have eliminated linear_programming_3c314f80373742988ad542a6d4ce66a111b17847.png!

  4. Repeat the procedure for the remaining variables until we only have a single variable left, leaving us with a set of constraints for this last variable. If the remaining constraints cannot be satisfied by this variable, then clearly the problem is infeasible, i.e. linear_programming_557cd7e1187463ab978168865e83ce0527a37eb6.png.

Fourier-Motzkin Elimination actually solves the LP-problem; however, it does so incredibly inefficiently. As we'll see later, these problems can be solved in polynomial time.

FM elimination starts out with linear_programming_094b02afce734f4ce51933d0093ef3d2da9f8123.png inequalities, and in the worst case we end up with linear_programming_70912da23fe5f2b7d30f43290f0dcf2a4921712a.png new inequalities (due to pairing up all the "different" inequalities).
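One elimination step can be sketched in plain Python (a toy dense implementation of mine, not from the notes); each row is a `(coefficients, rhs)` pair representing an inequality `sum_j a_j x_j <= b`:

```python
# Toy dense Fourier-Motzkin step (illustration, not from the notes).
def fm_eliminate(rows, k):
    """Eliminate variable k from the system {a . x <= b : (a, b) in rows}."""
    pos, neg, zero = [], [], []
    for a, b in rows:
        (pos if a[k] > 0 else neg if a[k] < 0 else zero).append((a, b))
    out = list(zero)
    # Combine every upper bound on x_k with every lower bound on x_k.
    for ap, bp in pos:
        for an, bn in neg:
            sp, sn = 1.0 / ap[k], -1.0 / an[k]   # normalize coeff to +1 / -1
            a_new = [sp * u + sn * v for u, v in zip(ap, an)]
            a_new[k] = 0.0                       # x_k cancels exactly
            out.append((a_new, sp * bp + sn * bn))
    return out

# x + y <= 4 and -x + y <= 0 combine into 2 y <= 4, i.e. y <= 2.
new_rows = fm_eliminate([([1, 1], 4), ([-1, 1], 0), ([0, -1], 0)], 0)
```

The nested loop over `pos` × `neg` is exactly where the worst-case quadratic blow-up per eliminated variable comes from.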

Equational / canonical form

We can constrain every variable linear_programming_e9f4d218474bc10aa94958ec30139aee865c0173.png to be linear_programming_9751b870357a44a02fdf9f9882cd699f168872fc.png by introducing two variables linear_programming_fdefa92e0793b56a0d499f685526eab77d23fa49.png for each variable linear_programming_e9f4d218474bc10aa94958ec30139aee865c0173.png. Then replace each occurrence of linear_programming_e9f4d218474bc10aa94958ec30139aee865c0173.png by linear_programming_f99be69972248822291cc8db4d35b590097a7a7b.png and add the constraints linear_programming_dc8c7e3878fdddd3d139d226ec853771b2a1b40b.png.

In doing so we end up with the positive orthant in linear_programming_c56d2a2224fedeb46b49b50d1b41f11e963216ce.png, and the constraints linear_programming_5184fd5260183d42f5f7818bd93528466c1cc2cd.png define a linear_programming_4b5ba3bf0549e830407d5ff876eaa496e4346fb4.png-dimensional subspace of linear_programming_c56d2a2224fedeb46b49b50d1b41f11e963216ce.png.
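The variable-splitting step can be sketched in one line (a toy illustration of mine, using dense coefficient rows): since a · x = a · x⁺ − a · x⁻, each row a over x becomes the row [a, −a] over (x⁺, x⁻):

```python
# Split every free variable x_j into x_j = xp_j - xm_j with xp_j, xm_j >= 0.
# A dense coefficient row a over x becomes the row [a, -a] over (xp, xm).
def split_free_variables(A):
    return [row + [-v for v in row] for row in A]

# x1 + 2 x2 <= 3  becomes  xp1 + 2 xp2 - xm1 - 2 xm2 <= 3
rows = split_free_variables([[1.0, 2.0]])
```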

TODO LP and reduction

Solving LP (given feasible)

Suppose linear_programming_7c80d8c7da2c936f89cc3567fe0e52b7645e77ba.png. Then we only need to find linear_programming_51fd3b35b9d78a473496695e74fb8c4d22f8889c.png such that

linear_programming_4acecb0aeba9288eff8f93f8620f1f0abc94d3dd.png

i.e. some feasible solution linear_programming_652054290aaf24427a2042dcf57a992cea705239.png.

We also assume that the feasible set is bounded [1], where linear_programming_5e1fb9e758795102611fe998f9f9b7c19b3ca4a0.png. We then use binary search on the objective value to close in on the optimum.
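The binary-search idea can be sketched as follows (my own illustration; `feasible` is a hypothetical stand-in for any feasibility oracle, e.g. Fourier-Motzkin applied to the system extended with the constraint "objective ≤ t"):

```python
# Binary search for the optimal objective value, given a feasibility oracle.
# `feasible(t)` is a hypothetical stand-in answering: "is there a feasible x
# with objective value <= t?" (monotone in t).
def minimize_objective(feasible, lo, hi, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid      # a feasible point with value <= mid exists
        else:
            lo = mid
    return hi

# Toy oracle: "is there x in [3, 10] with x <= t?" -> optimum is 3.
opt = minimize_objective(lambda t: t >= 3.0, 0.0, 10.0)
```

Boundedness is what guarantees the initial bracket [lo, hi] exists.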

Basic feasible solution

In geometric terms, the feasible region defined by all values of linear_programming_ed39d9a397196f8f0ce6388b0ea4e0c1dd8becee.png such that

linear_programming_a41d9be3c89876350eac75133b77cc9a0c6815b1.png

is a (possibly unbounded) convex polytope. There is a simple characterization of the extreme points or vertices of this polytope, namely an element linear_programming_b9add274efbba8d4bd1e19ade2f37af1d514f850.png of the feasible region is an extreme point if and only if the subset of column vectors linear_programming_481ba121eb1e7727e9bbec4a6485feadca1c78c4.png corresponding to the non-zero entries of linear_programming_ed39d9a397196f8f0ce6388b0ea4e0c1dd8becee.png (linear_programming_75793b79260cc780f36643f6e4a9af9b0cc36d0e.png) are linearly independent.

In this context, such a point is known as a basic feasible solution (BFS).

The number of basic variables in a BFS always equals the number of constraints. That is,

linear_programming_7c842931727937b12df9c98883dd23caa5255f77.png

since the number of constraints determines the number of independent variables in our system. Why? Well, if we have 3 variables and 2 constraints, then that means that one of the variables, say linear_programming_e6d4f3d47d675d6505dd6f594394aa6bb212dfeb.png, can be expressed as a function of one of the other variables, say linear_programming_2c5024c544eadf13553ff6f41d60836455b37ed9.png, and so on.

What does this mean? We can obtain some initial BFS by setting enough variables to zero such that we're left with only linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png variables (where we should have only one variable being non-zero in each constraint). In this way we obtain some linearly indep. system from which we can start pivoting.

We call the variables currently in the basis basic variables, and those not in the basis non-basic variables.

TODO Initial BFS

  • If all the constraints in the original LP formulation are of the linear_programming_216c9d87eb6e486c1498cb24260552ba828ecf1b.png type, we can readily obtain an initial BFS for Simplex, consisting of all the slack variables in the corresponding standard form formulation
  • We need a way to find an initial BFS in general, since the constraint matrix might not contain the unit columns (an identity submatrix) from which we could read off a basis directly

Consider the following general linear problem:

linear_programming_310ffc41dc83fb4cdf5d111ac3d9d630ad9a1e90.png

which in "standard form" becomes

linear_programming_6c2829709e33be31a6d0c264a2405915df55b76f.png

where we've turned the inequalities into equalities by introducing slack-variables linear_programming_3b7451ad7996898d28f6b4fda8ffae3696504e26.png and linear_programming_e6a4f0760a4bb073e5809c6ac1d992fcbe65c737.png.

  • linear_programming_e6a4f0760a4bb073e5809c6ac1d992fcbe65c737.png cannot be a basic variable (included in the basis) for linear_programming_0263575313f4c86e29176348fc316fdc3e3f07a5.png, even though it appears only in this constraint, since it would imply a negative value for it (i.e. linear_programming_df16c83aceec8b168305dc5fbdd739a1f3f35a5a.png), and the resulting basic solution would not be feasible
  • linear_programming_06d23333d51f97f0775473488361eb6462413339.png does not even include an auxiliary variable

To fix these two problematic constraints linear_programming_0263575313f4c86e29176348fc316fdc3e3f07a5.png and linear_programming_06d23333d51f97f0775473488361eb6462413339.png we "synthesize" an initial BFS by introducing additional artificial variables linear_programming_4770acc83810d465ecc2afe336e2f1beb448775a.png and linear_programming_f9d3df4ce983df51f62373614db21bc9caafc21a.png for linear_programming_0263575313f4c86e29176348fc316fdc3e3f07a5.png and linear_programming_06d23333d51f97f0775473488361eb6462413339.png, with linear_programming_3d062d8dfd9aeb75329244aa011faaf0b9b16ae7.png.

  • Initial BFS is linear_programming_bede9deb6660deb1282ddc74bba5460847eadf47.png with linear_programming_384ae80c38178bebb55ab778a1b2ab54f2e0cdd2.png
  • This alters the structure, and therefore the content, of the original LP
  • Easy to check that this BFS linear_programming_6b80266137e6d3f09dfa7b307a48facb919650a4.png (together with the implied zero values of the nonbasic variables) is not feasible for the original LP!
  • BUT, if we obtain another BFS for the modified problem in which the artificial variables are non-basic, then this BFS is also feasible for the original LP (since the artificial variables, being nonbasic, are zero in the "canonical form")

To get the artificial variables linear_programming_e89f6e7a35d1d72148fa1727331752c085771f38.png to zero, we try to achieve the following objective:

linear_programming_9d7f754b07d6848045d253047f6a9e3e937cc908.png

This is known as Phase 1 of the Simplex method. Observe

  • If the original LP has a feasible solution, then nonnegativity of linear_programming_4770acc83810d465ecc2afe336e2f1beb448775a.png and linear_programming_f9d3df4ce983df51f62373614db21bc9caafc21a.png implies that the optimal value of the altered LP will be zero!
    • Hence, at the end of this altered LP we will have a feasible solution for the original formulation!
  • If the altered LP has a strictly positive optimal objective value, the artificial variables linear_programming_e89f6e7a35d1d72148fa1727331752c085771f38.png are absolutely necessary to obtain a feasible solution for this set of constraints => the original LP is infeasible

Therefore, all we do is solve the altered system, and if we get a solution for it without the introduced artificial variables linear_programming_e89f6e7a35d1d72148fa1727331752c085771f38.png in the BFS, we have an initial BFS for the original LP!
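Phase 1 can be sketched as its own small LP (my own illustration; for brevity the auxiliary problem is handed to SciPy rather than to a hand-rolled Simplex):

```python
# Phase 1 sketch: feasibility of {x >= 0 : A x = b} via artificial variables.
import numpy as np
from scipy.optimize import linprog

def phase1_feasible(A, b, tol=1e-8):
    A, b = np.asarray(A, float), np.asarray(b, float)
    m, n = A.shape
    # Flip rows so that b >= 0; the artificials a = b then form a valid BFS.
    sign = np.where(b < 0, -1.0, 1.0)
    A, b = A * sign[:, None], b * sign
    c = np.concatenate([np.zeros(n), np.ones(m)])  # minimize sum of artificials
    res = linprog(c, A_eq=np.hstack([A, np.eye(m)]), b_eq=b)
    # Optimal value zero <=> the original system has a feasible point.
    return bool(res.status == 0 and res.fun < tol)

# x1 + x2 = 2 with x >= 0 is feasible; x1 + x2 = -1 with x >= 0 is not.
```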

TODO Simplex method

Have an LP in (primal) form:

linear_programming_eb73ddbc0ad99a161be157bac25511ef14e4c4fe.png

Note the inequality constraints.

  1. Add slack variables linear_programming_3cd6585c90a8675bffac04901b83b7c623fc740a.png to each inequality, to get:

    linear_programming_7cac6d0e57fe7b1afe99fe041469fd2f864c58da.png

    Why bother?

    • We're now left with the familiar (primal) form of a LP
      • To make it completely obvious, one can let linear_programming_3c62801c8b101b42fd4b6cbdfbae4a86fb13848d.png and add the identity matrix to the lower-left of the matrix linear_programming_d36ba5e7bd73c1f88efce6f8c8de9dac7df701d7.png

        linear_programming_236661b322fc0bf54f98118520c02f4e272e14c6.png

        Then we can write the problem with slack-variables as a system of linear equations:

        linear_programming_8ad7e91c6245adde5a3e5e0859c803a1ba9ece3c.png

    • Every equality constraint now has at least one variable on the LHS with coeff. 1, which is indep. of the other inequalities
    • Picking one such variable for each equality, we obtain a set linear_programming_aa45b78ca4fcfb1128b81983bc69cebaddb6486d.png of linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png variables called a basis (variables in the basis are called basic variables); an obvious choice is

      linear_programming_6387c28c125eee0e4ce1fb7ba7198f8c9ea240f7.png

  2. Rearrange constraints to get

    linear_programming_415c8365a7c0ddcd1f5de262ee6aebbd8fb7f08a.png

    Just for the sake of demonstration, suppose all the constants are positive, i.e. linear_programming_9af77dfdbd81b65c8035f5715798ab94acb5450e.png. Then clearly the best solution is just letting linear_programming_bf48c8d3ca2da068c2d84f74b06f35f74fe4fcbf.png. This then corresponds to a vertex.

  3. Now we have some initial suggestion for a solution, i.e. a basic feasible solution (BFS) with basis linear_programming_aa45b78ca4fcfb1128b81983bc69cebaddb6486d.png. To find other possible solutions: pivot! Let the current basis be

    linear_programming_1e9d776f749cfb2dd64c01c5c52b6daf71886131.png

    with linear_programming_9fce42b3b82bd91c35798e2a4f442c85efe3b20d.png being a variable on the LHS of the r-th constraint linear_programming_afb1bc7ef6c183a97b917579e762d5b06c9009bf.png (in the above rearrangement). We then suggest a new basis linear_programming_db73d623f16e63dec9e6ff05eaa670efdf3c7a61.png

    linear_programming_493e3540b140d151a5f787898983379509a449b5.png

    i.e. removing linear_programming_9fce42b3b82bd91c35798e2a4f442c85efe3b20d.png and choosing some other linear_programming_e27f5d96a277220941afcc8233c49d665a7defe0.png on the LHS of constraint linear_programming_afb1bc7ef6c183a97b917579e762d5b06c9009bf.png, which is performed as follows:

    1. Assuming linear_programming_afb1bc7ef6c183a97b917579e762d5b06c9009bf.png involves linear_programming_e27f5d96a277220941afcc8233c49d665a7defe0.png, rewrite linear_programming_afb1bc7ef6c183a97b917579e762d5b06c9009bf.png using linear_programming_46c301d5370cef47729dc4b21875e7a5fefa6b16.png
    2. Substitute linear_programming_46c301d5370cef47729dc4b21875e7a5fefa6b16.png in all other constraints linear_programming_e14af65327c0bb1432c75dc37ebcc098aff42864.png, obtaining constraints linear_programming_a9f5ea480f195488e9fc0935d2e2da4c0c64424d.png
    3. New constraints linear_programming_a9f5ea480f195488e9fc0935d2e2da4c0c64424d.png have a new basis

      linear_programming_5b0a4b4a3642620886fd0d4baf4d9bed559bc302.png

    4. Substitute linear_programming_46c301d5370cef47729dc4b21875e7a5fefa6b16.png in the objective linear_programming_be61dd3b009faae59aa34a7eeb9c1ee5d1bb1bde.png so that linear_programming_be61dd3b009faae59aa34a7eeb9c1ee5d1bb1bde.png again only depends on variables not in basis

Of course, we're not going to pivot no matter what! We pivot if and only if:

  • New constants linear_programming_9af77dfdbd81b65c8035f5715798ab94acb5450e.png, since this is a boundary condition
  • New basis must at least not decrease objective function

Selecting an entering variable (to the basis)

  • Given that at every objective-improving iteration the Simplex algorithm considers adjacent bfs's, it follows that only one candidate non-basic variable enters the basis
  • Want variable with greatest improvement to the objective to enter basis
  • For a variable to enter the basis, another one needs to leave

Simplex Tableau

Given an LP problem in primal form, we can write it as an augmented matrix with the following:

  • 1st column corresponds to the coefficient of linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png (the objective function)
  • 2nd through (n + 1)-th columns correspond to the coefficients of linear_programming_e9f4d218474bc10aa94958ec30139aee865c0173.png, i.e. linear_programming_b628764d0635802d2b9bfc6478845b6323e92518.png
  • Last ((n + 2)-th) column holds the constant in the objective function and the RHS of the constraints

Then we can write

linear_programming_947ce7d7e755566c24f5e7206133f2d2ec86a635.png

where linear_programming_24db75342343379bc61072ed218d34613269537e.png is simply the value linear_programming_9c15196dd07b1add486b8b54592e74bfe946ed95.png takes on if all variables are zero, i.e.

linear_programming_bbcd909860d26cc95f9b5efcf1027333cc511abb.png

Usually we set this to zero, linear_programming_74d3a5c230e76b61323ddf201ee313949f7d7641.png.

Now, this is just a linear system, for which we have:

  • if linear_programming_8e00d2c3b5864f078d1e02aff696c820b44ef405.png: linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png independent variables
    • when we've solved for the linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png constraints, the rest of the variables are simply functions of the constraints
  • if linear_programming_ad412f7e2e823715387ce1e0cb13429e97a918a2.png: linear_programming_f07b932b12c91bca424fd428d383495413b5b6a9.png independent variables (might not be able to satisfy the constraints)

Now we simply make use of Gaussian Elimination to solve this system;

  • unless the matrix has linearly indep. columns and is linear_programming_812a7be1c6b4e6d0c54540febd3d26b36f71738b.png, we do not have a unique solution!
  • therefore we simply look for solutions where we have some linear_programming_fe52c58a96549528260fe0d7d04caf390dd47865.png linearly independent columns, which we say form our basis linear_programming_aa45b78ca4fcfb1128b81983bc69cebaddb6486d.png (see BFS)

<2018-02-05 Mon> But what if the linear_programming_b628764d0635802d2b9bfc6478845b6323e92518.png are not all non-negative?!

At an optimum they can't be! You have optimality exactly when they are all non-negative.

If they're not, you pivot!
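All of the above — the tableau, the entering rule (most negative reduced cost), the min-ratio leaving rule, and the pivot — can be sketched as a toy dense implementation (mine, not from the notes; it assumes b ≥ 0 so the slack variables form the initial BFS):

```python
# Toy dense tableau Simplex for  max c^T x  s.t.  A x <= b,  x >= 0,
# assuming b >= 0 so that the slack variables form the initial BFS.
import numpy as np

def simplex(c, A, b):
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    # Tableau rows: [A | I | b]; bottom row holds the reduced costs [-c | 0 | 0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = -c
    basis = list(range(n, n + m))          # slacks start as the basic variables
    while True:
        j = int(np.argmin(T[m, :-1]))      # entering: most negative reduced cost
        if T[m, j] >= -1e-12:
            break                          # all reduced costs >= 0: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        r = int(np.argmin(ratios))         # leaving row via the min-ratio test
        if ratios[r] == np.inf:
            raise ValueError("unbounded LP")
        T[r] /= T[r, j]                    # pivot: make column j a unit column
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r] = j
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], T[m, -1]                 # optimal point and objective value

x, value = simplex([1.0, 1.0], [[1.0, 2.0], [3.0, 1.0]], [4.0, 6.0])
```

On this toy LP the optimum is the vertex (1.6, 1.2) with value 2.8. (A production implementation would also need anti-cycling rules and a Phase 1 for general b.)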

Duality

Notation

  • Linear problem (primal form)

    linear_programming_d66d669beac43db0f3a062e9492105fe9428df1f.png

  • linear_programming_ed39d9a397196f8f0ce6388b0ea4e0c1dd8becee.png is the vector of variables to minimize over
  • Linear problem (dual form)

    linear_programming_ef5863383120630cdaaf9c689e84090e9bb72677.png

Primal form

linear_programming_8c7ec686f3fb34e751448526177d75e74f53b3dc.png

Dual form

linear_programming_3127656e0daceb04be3b1a62d87f644ed8fa3bc4.png

Weak Duality Theorem

linear_programming_f01ef32b1e7db3674384c209e8fef79dc9d4cae9.png

That is,

linear_programming_4844a5fde1180325faef190e5b5f8315fc5308ad.png
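Assuming the usual convention (my assumption — the notes' exact primal/dual forms are in the images above) of primal max cᵀx s.t. Ax ≤ b, x ≥ 0 and dual min bᵀy s.t. Aᵀy ≥ c, y ≥ 0, the entire proof of weak duality is one chain of inequalities:

```latex
c^\top x \;\le\; (A^\top y)^\top x \;=\; y^\top A x \;\le\; y^\top b \;=\; b^\top y
```

The first inequality uses Aᵀy ≥ c together with x ≥ 0; the second uses Ax ≤ b together with y ≥ 0.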

Farkas Lemma

Let linear_programming_74a486d9c6569d2242e313c021b0b7f6b66d0ac7.png and linear_programming_3f14cb4c491166cd16e689aef4bb050047fb8ca6.png.

Then exactly one of the following statements is true:

  1. There exists an linear_programming_7a9f00f52fc25b76ee5704ac6f6e9bc4da1d48a9.png such that linear_programming_5760a61748c26f2c87001e3dafc90a1a79d3d5b9.png and linear_programming_bde2ff5ede38fc9002efad675bcd590fc1057f2b.png
  2. There exists a linear_programming_6d0768760e8a9d7890967786503784421a0ec229.png such that linear_programming_97718ca16772318a6e7a3de26d781709baa8f8d3.png and linear_programming_33efb59ad645384adc208cd3603749cac8233825.png

The way I understand Farkas lemma is as follows:

  1. linear_programming_7a10debdcd137c96dab9b7432dd4c94f2c3a3a91.png, then clearly linear_programming_4bf08757c95834ceba3434285b790a1ac4cf9fce.png such that linear_programming_5760a61748c26f2c87001e3dafc90a1a79d3d5b9.png
  2. linear_programming_7a10debdcd137c96dab9b7432dd4c94f2c3a3a91.png, then we can find some vector linear_programming_6d0768760e8a9d7890967786503784421a0ec229.png such that the hyperplane defined by

    linear_programming_48f7acc9fcf6e96e50f62eee08430cacb8e3597e.png

    i.e. linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png is the normal vector of linear_programming_3807b8495f01004f1d88370e2f4fad2f5032db06.png, such that

    linear_programming_c58f966d3999ef44974a27e93c695e8014e3f819.png

    or, equivalently,

    linear_programming_4ae4550f329b8c105e02c1d55eee0f0ef7f9f9ef.png

    Where:

    linear_programming_ade7d3f6836e030b1bc2f3595ae65ddc31a2c160.png

    From now on, we'll use the notation linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png. Now, what if we decided to normalize each of these linear_programming_ff02998d51ab4a1a4a67d47f299a6367a8932e86.png? Well, then we're looking at the projection of linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png onto the linear_programming_7da3e4ba04b7caa464508fa538b6ec215a0ab351.png. Why is this interesting? Because the inequality

    linear_programming_ff756d52aed67a39b3022618f99924413fa78b70.png

    is therefore saying that the projection of linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png onto linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png is non-positive in all entries, i.e. the angle between linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png and every vector in linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png is linear_programming_089eaee6e8a9f8878fd17ade7c55e4a086e7adbc.png! In turn, this implies that the hyperplane linear_programming_3807b8495f01004f1d88370e2f4fad2f5032db06.png, which is defined by all vectors perpendicular to linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png, i.e. at an angle of linear_programming_030af68c81a9d7fa7dc3c72a85a8223db0796071.png, lies "above" linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png, since linear_programming_bac600931ece790079d91d64e9ca2b5c12fe385e.png. Finally, we're computing linear_programming_ca83f1e3b492a3c0e95e9dad36b1006d481c1663.png, which gives the angle between linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png and linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png. Then saying that

    linear_programming_7d84fa4f6900211d97d64e663ebddbcbb39605b9.png

    is equivalent to saying that the angle between linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png and linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png is linear_programming_31537360e8f6c9ff991fc714cf1f05a88153d7b1.png. BUT, if linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png is at an angle of linear_programming_31537360e8f6c9ff991fc714cf1f05a88153d7b1.png wrt. linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png, AND linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png is at an angle linear_programming_089eaee6e8a9f8878fd17ade7c55e4a086e7adbc.png with linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png, then, since the hyperplane linear_programming_3807b8495f01004f1d88370e2f4fad2f5032db06.png is at linear_programming_030af68c81a9d7fa7dc3c72a85a8223db0796071.png wrt. linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png, linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png lies somewhere between linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png and the hyperplane linear_programming_3807b8495f01004f1d88370e2f4fad2f5032db06.png, which in turn lies above linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png; hence linear_programming_3807b8495f01004f1d88370e2f4fad2f5032db06.png lies between linear_programming_31b79b0a21e0429a41e091d4b3f88b3448dbf07d.png and linear_programming_5d1bec66359813fe81f08ce2bb174b8a266d97eb.png.
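A tiny numeric sanity check of the two alternatives (my own example; I'm assuming the standard statement of the lemma, i.e. either there is x ≥ 0 with Ax = b, or there is y with Aᵀy ≥ 0 and bᵀy < 0):

```python
import numpy as np

A = np.eye(2)               # the cone of A's columns is the positive quadrant

# b inside the cone: alternative 1 holds (here trivially with x = b >= 0).
b1 = np.array([2.0, 3.0])
x = b1
alt1 = bool(np.allclose(A @ x, b1) and np.all(x >= 0))

# b outside the cone: alternative 2 holds with a separating vector y.
b2 = np.array([-1.0, 0.0])
y = np.array([1.0, 0.0])    # A^T y = [1, 0] >= 0 while b2 . y = -1 < 0
alt2 = bool(np.all(A.T @ y >= 0) and b2 @ y < 0)
```

The vector y in the second case is exactly the normal of a separating hyperplane from the geometric picture above.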

Strong Duality

The optimal (maximal) value for the dual form of a LP problem is also the optimal (minimal) value for the primal form of the LP problem.

Let the minimal solution to the primal problem be

linear_programming_d62c9f7933189b37538a5aec88f72a00ed584f72.png

Then define

linear_programming_398d29dd384bf59c3aa54df627e33ae8b12a77b4.png

with linear_programming_c89ff0b0ec0d30f3a0441b398e2910e63d3784d9.png. We then have two cases for linear_programming_03d96c52df5ea6a510607e2260e0745d11e4bd85.png:

  • linear_programming_0c580a47ad9658a89fe84725c8f27e53778217b7.png, in which case we are in case 1. of Farkas' lemma, because linear_programming_7cd54d89adfc120eba37dd1b07f69d348326724d.png

    linear_programming_cedb1feab6712e80d444477fab7f415a4e6e6d1f.png

  • linear_programming_95253ea0c2082f08ac36eed736be50b9577033e8.png, in which case there exists no non-negative solution (because linear_programming_544e971b7c542f60831d1bfc5142af2754ad68f3.png is already minimal) and we have Farkas lemma case 2. Thus, there exist linear_programming_29c26837fff0673a98d6523a80b727d38797f426.png and linear_programming_6e9d20113f7ef024a71fc3edcc10e77417d06393.png, such that

    linear_programming_ae5ff4fa472243caeae38b404156b7e22f37cacd.png

    or, equivalently,

    linear_programming_4c7616e462d5d8ea45f63c1f0fac32769beabef9.png

    Now, we can find linear_programming_07fccf56e565b74bbea2d6a5262ed2f439c0a63b.png such that

    linear_programming_74c47d6cba2f306d9135ec09715fa8146f46c29b.png

    Then, letting linear_programming_95253ea0c2082f08ac36eed736be50b9577033e8.png, from the equation above, we would get

    linear_programming_6a65360e4321519a911b490c714988e7a0336e19.png

    Which we observe is only possible if linear_programming_7d9048578ef68f38893adb80d09b3f6c6f72500e.png, since linear_programming_ebdf911991f3070d214ca511c72bd5e34228f5c3.png. linear_programming_07fccf56e565b74bbea2d6a5262ed2f439c0a63b.png is simply the normal vector of the hyperplane, thus we can scale it arbitrarily; let's set linear_programming_9f6ed2cf0191f332e825499853a5c30d4273dc41.png:

    linear_programming_a97c35f0c315ee015b921b0005fd036537cbf4ee.png

    Observe that

    linear_programming_00e52b78c3edc34d8fa86c75c2c3d60ad83c12b0.png

    is a feasible value for the objective of our dual problem. From the Weak Duality Theorem, we know that

    linear_programming_94c473c7c0a62a83a82f684b52712b81aa1d9de2.png

    But we've just shown that we can get arbitrarily close to linear_programming_544e971b7c542f60831d1bfc5142af2754ad68f3.png; hence at the optimal (maximal) value of our dual form we also have linear_programming_544e971b7c542f60831d1bfc5142af2754ad68f3.png!

Sources

Footnotes:

1

Unbounded LPs allow arbitrarily large values for our objective function, i.e. they are not interesting.