
Given the function f:

definition f :: "real => real"
where "f x = x"

I can show that as n tends to 0, f(x+n) tends to f(x) by the following lemma

lemma "(λn. f(x+n)) -- 0 --> f x"
unfolding f_def
apply (auto intro!: tendsto_eq_intros)
done

As a further step, I want to show that as (y-x) tends to 0, f(x + (y-x)) tends to f(x). Essentially, letting n=y-x.

I am having trouble solving this problem, as I cannot substitute for the lambda-bound variable, or even let n = y-x.

How can I solve this problem?

creator22

4 Answers


Convergence always needs a function to identify what changes. However, the statement "(y - x) tends to 0" does not make the changing part explicit. Which of the variables changes? If x is fixed and y changes, then you can express this as (%y. x - y); if both x and y change, then it is %(x, y). x - y.

Once this is settled, you can then use the composition theorem LIM_compose_eventually for limits. Note however that the filter at, which -- _ --> uses internally, does not evaluate the function at the target point itself. For example, (%n. f (x + n)) -- 0 --> f x holds, but also (%n. if n = 0 then 10000 else f (x + n)) -- 0 --> f x. Therefore, if the function %y. x - y equals 0 in a neighbourhood, then this theorem will not work. In that case, you should be able to prove your result directly.
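For the concrete f from the question, a direct proof in the style of the question's own lemma might look like the sketch below. It has not been re-checked in a running Isabelle; the point is that the limit is now taken in the filter at x, where y - x vanishes, rather than at 0:

lemma "(λy. f (x + (y - x))) -- x --> f x"
  (* x is fixed and y does the moving; after unfolding f, the same
     tendsto_eq_intros rules as in the question close the goal *)
  unfolding f_def
  apply (auto intro!: tendsto_eq_intros)
  done

Alternatively, the already proved (λn. f (x + n)) -- 0 --> f x can be chained with (λy. y - x) -- x --> 0 via LIM_compose_eventually, since y - x is non-zero on a punctured neighbourhood of x.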

Andreas Lochbihler
  • Is there a way I can let h = y-x, and then show that "(λh. f(x+h)) -- 0 --> f x"? I think this would be the most intuitive solution. Also, I am having difficulty proving that "(λ(x,y). x-y) -- 0 --> (0::real)". Can it be done? – creator22 Jan 28 '15 at 12:45

You can prove

"(λ(x,y). x-y) -- 0 --> (0::real)"

by rewriting it into:

"(λx. fst x - snd x) -- (0, 0) --> (0::real)"

then the tendsto_eq_intros rules should work.
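For what it's worth, a minimal sketch of that rewritten form might be the one below. It assumes that the product-type topology and the pairwise zero are available with the imports at hand, and that the fst/snd rules are among the tendsto_eq_intros; none of this has been re-checked here:

lemma "(λx. fst x - snd x) -- (0, 0) --> (0::real)"
  apply (auto intro!: tendsto_eq_intros)
  done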

Johannes

Short answer

  • Being forever bogged down in low-level stuff, I forget all my calculus, so this is what I do.
  • I resort to the epsilon-delta definition to try and check myself (wiki epsilon-delta limit).
  • I require that I only take limits of 1-variable functions.

You start with an h-form or delta-x form of a limit. I translate it here, with my informal limit notation, into what I think the HOL notation means:

 F1: limit[h -> 0]f(x + h) = f(x)

Let h = y - x. So now, the delta-x is defined. By simple substitution, I get this:

F2: limit[(y - x) -> 0]f(x + (y - x)) = f(x)

To make sense of it, I say this:

for a fixed value of x, y varies to x, and so |y - x| goes to zero, with a limit of f(x).

In the long answer, I expand F2 with the epsilon-delta definition to check whether the limit formula is valid.

Converting everything to HOL, and using the non-abbreviated notation, I get this:

lemma "(((λy x. f(x + (y - x))) x) ---> f x) (at (0::real))"
  apply(simp add: f_def )
  by(metis tendsto_const)

You can check the details and comments in my long answer to see if I'm right. Of note is the formula f(x + (y - x)) = f(y).

Long answer

I include a complete theory at the bottom, but elaborate a little on it first.

As A. Lochbihler points out, I think the starting point is to decide what space we're working in. Is it 1-variable or 2-variable calculus?

Your function f is 1-variable, so I take it as a hard requirement. This means that I do my best not to resort to taking the limit of a 2-variable function.

I think the problem is primarily just matching up standard limit notation with HOL limit notation.

I switch to the non-abbreviated HOL notation for a limit: (f ---> L) (at a). That's just for my own benefit, and it comes from Topological_Spaces.thy#l1868:

(*abbreviation
  LIM :: "('a::topological_space ⇒ 'b::topological_space) ⇒ 'a ⇒ 'b ⇒ bool"
        ("((_)/ -- (_)/ --> (_))" [60, 0, 60] 60) where
  "f -- a --> L ≡ (f ---> L) (at a)" *)

Here, I now give two different forms of a limit, using my own notation:

F0: limit[x -> c]f(x) = f(c)

F1: limit[h -> 0]f(x + h) = f(x)

I get the third form by simple substitution, as shown above:

F2: limit[(y - x) -> 0]f(x + (y - x)) = f(x)

Two HOL forms of F2 are at the bottom of the theory, which are these:

lemma "(((λy x. f(x + (y - x))) x) ---> f x) (at (0::real))"
  apply(simp add: f_def )
  by(metis tendsto_const)

lemma
  fixes g :: "real => real => real"
  assumes "g = (λy x. f(x + (y - x)))"
  shows "((g x) ---> f x) (at (0::real))"
    apply(simp add: f_def assms)
  by(metis tendsto_const)

Here's the theory, copied from Notepad++, so that it's all ASCII characters.

theory i150128a_limits
imports Complex_Main begin
(*abbreviation
  LIM :: "('a::topological_space => 'b::topological_space) => 'a => 'b => bool"
        ("((_)/ -- (_)/ --> (_))" [60, 0, 60] 60) where
  "f -- a --> L \<equiv> (f ---> L) (at a)" *)
  
--{*| F0: limit[x -> c]f(x) = f(c) |*}
    (*Constant c is fixed, and x will vary to c, so (x - c) or (c - x) goes 
      to 0.*)
      
--{*| F1: limit[h -> 0]f(x + h) = f(x) |*}
    (*For a fixed value of variable 'x', 'h' will vary to 0, so 
      '(x + h)' will go to x.
     Limit definition for this limit:     
     ALL e > 0. EX d > 0. if 0 < |h - 0| < d then |f(x + h) - f(x)| < e. 
     So, if 'f x = x', then 0 < |h| < d --> |x + h - x| = |h| < e. Let d = e.*)
  
--{*| HAVE: f definition |*}
  definition "f x = (x::real)"
      
--{*| HAVE: The lemma formula. It appears to match up with F1 above. For a
      fixed value of variable 'x', the bound variable 'h' in the formula
      '(\<lambda>h. f(x + h))' will vary to 0. |*}
      
  term "((\<lambda>h. f(x + h)) ---> f x) (at (0::real))"
  
--{*| LET: h = y - x.                              |*}
--{*| By substitution in F1:                       |*}
--{*| F2: limit[(y - x) -> 0]f(x + (y - x)) = f(x) |*}
    (*Similar to F1, for a fixed x, y goes to x, so (x - y) and (y - x) will 
      go to zero.*)     
        
    (* ALL e > 0. EX d > 0. 
         if 0 < |(y - x) - 0| < d then |f(x + (y - x)) - f(x)| < e. 
       If 'f x = x' then if 0 < |y - x| < d 
         then |x + (y - x) - x| = |y - x| < e. Again, let d = e. *)

--{*| In the next lemma, 'g = ((\<lambda>y x. f(x + (y - x))) x)' is a
      1-variable function in which 'y' varies to 'x', as in F2 above.
      There is 'x' in the 'shows' formula, which is a free variable. Though
      it is a variable, for a fixed 'x', the 'x' in 'g' is the same 'x' as
      in 'f x'. |*}
  lemma
    fixes g :: "real => real => real"
    assumes "g = (\<lambda>y x. f(x + (y - x)))"
    shows "((g x) ---> f x) (at (0::real))"
      apply(simp add: f_def assms)
    by(metis tendsto_const)

--{*| The consolidated form. |*}
  lemma "(((\<lambda>y x. f(x + (y - x))) x) ---> f x) (at (0::real))"
    apply(simp add: f_def )
    by(metis tendsto_const)
    
end
Community

Update to my other answer: My HOL formulas are wrong

I won't explain why I operate this way, but I can't edit the other answer, and the HOL formulas there are wrong. Correcting things like this ends up causing clutter, so I'll try to stay away from all this.

My correction ends up being longish, which could be perceived as creating even more clutter. If my 2 answers are deleted, I wouldn't really care. I just make an attempt to correct myself in several ways. Earlier, I submitted an edit to the 1st answer. If it goes through, things get even more cluttered.

The short answer, about what's wrong, is that in the lambda calculus functions, I have a y x where I should have put an x y, which wouldn't have allowed the lemma to be proved. One lesson is that a person can't prove anything untrue (assuming HOL's consistency), but a person can prove something meaningless, which I already knew, having proved many meaningless things.

Corrected, it's like this:

--{*| The consolidated form CORRECTED AND BOGUS. |*}
  lemma "(((%x y. f(x + (y - x))) x) ---> f x) (at (0::real))"
    apply(simp add: f_def )
  (*GOAL: (%y::real. y) -- 0::real --> x*)
  oops

My former attempt to investigate h as a delta x

It seems to me that the basis of the question revolves around an attempt to make explicit the meaning of h in a standard limit formula.

I think it's a legitimate exercise to go through. That the different limit formulas are equivalent can be trivially seen, but, for a particular person, not knowing how to formalize the trivial in HOL makes things non-trivial, where the end result may even be a case of "this is a trivial problem, why isn't its formalization trivial?".

Taking an h form to the x - c form, failing to get it all in a HOL form

From my perspective, this is mainly related to two problems:

  • In the use of standard limit formulas, they define something like h = x - c, where c is a constant, but
    • in my searches, I don't find any place where anyone ever explicitly substitutes x - c for h,
    • so I can't authoritatively check myself on whether my substitution follows standard, notational conventions,
    • and I don't have the time to dig through real analysis books to make sure I have a precise understanding of any and all notation involved.
  • Additionally, standard limit notation for 1-variable calculus needs to be converted into a HOL limit function, but
    • I don't know how to do that. I generally like to operate under healthy paranoia, but here, I end up needing someone who knows HOL, but also has calculus and real analysis fresh on their mind.

Getting to the point (obfuscated point part 1)

I don't know exactly what the OP is or was thinking, but I take as a starting point the limit of a 1-variable, continuous function, since his f is equivalent to id. Here, I use my informal notation:

F0: limit[x -> c]f(x) = f(c).

In my first answer, I say something like "for a fixed x, y varies to x". However, this is 1-variable calculus, so there's only one thing that varies, and that's x. In the formula, c is a constant. Having healthy paranoia, right now, I'm looking for my personal mathematician to say, "Yes, of course. That's trivial." If I was partially confused the first time, then what does that say? It could be deja vu all over again.

I want to think in terms of delta x, so I let h = x - c, and I replace F0 by this next F1:

F1: limit[h -> 0]f(c + h) = f(c).

I'm not liking this at all. The use of h doesn't normally come into play until derivatives. My answer keeps getting longer, because I feel compelled to say things like, "I see stuff similar to this in calculus books, but I want to find where it's completely formalized. I looked in Apostol's real analysis book at the limits and derivatives sections, and I didn't see any sloppy use of h, but then I didn't see any use of h for the short time I looked, and I don't have time to keep looking."

Anyway, I want to be explicit about what h is, so I get this:

  F2: limit[(x - c) -> 0]f(c + (x - c)) = f(c).
= F3: limit[(x - c) -> 0]f(x) = f(c). 

Taking Stewart's textbook as the "typical calculus book", the author will clarify what h is, and here is one point where Stewart does that, in a limit where h -> 0 is being used:

Notice that as x approaches c [actually, 'a' in the book], h approaches 0 (because h = x - c) and so the expressions for the slope... [Stewart, 6th, page 145]

The formalization is all in his parenthesized phrase, "(because h = x - c)", which is no formalization at all. But then, his book isn't meant to be a completely formal book on real analysis, though it is mostly rigorous.

The point? (finally, the final obfuscated point)

There's kind of a dilemma: the purpose of h is to emphasize, in f(c + h), that h is going to 0. But if you do the substitution, then you end up with just f(x), as in F3. After the substitution, there's no variable in f going to 0. We're back to x going to c, as with my starting point, F0.
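If I recall correctly, HOL's Limits.thy already records the passage between the c-form F0 and the h-form F1. The names and statements below are from memory, so they are worth double-checking:

(* Recalled from HOL/Limits.thy; re-check the exact names and forms.
   LIM_offset_zero:        f -- a --> L ==> (%h. f (a + h)) -- 0 --> L
   LIM_offset_zero_cancel: (%h. f (a + h)) -- 0 --> L ==> f -- a --> L  *)
thm LIM_offset_zero LIM_offset_zero_cancel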

Finally, consider this HOL lemma:

lemma 
    "((%x. f(c + (x - c))) ---> f c) (at (0::real))"
  apply(simp add: f_def )
(*GOAL: (%x::real. x) -- 0::real --> c*)
  oops

That's not what I need, but it represents what I need. I need a fixed constant c. The 0 alone is no good, because that's basically saying x is going to 0. What I need is to say that x - c is going to 0.

I don't know how to fix it all.
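One guess, echoing A. Lochbihler's answer, is to phrase "x - c goes to 0" by taking the limit in the at c filter instead of at 0. I haven't re-checked this in a running Isabelle, so treat it only as a sketch:

lemma "((%x. f(c + (x - c))) ---> f c) (at c)"
  unfolding f_def
  (* goal: ((%x. c + (x - c)) ---> c) (at c); c is fixed, x goes to c,
     so (x - c) goes to 0 *)
  by(auto intro!: tendsto_eq_intros)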

There are trivial simplifications involved, like with f(c + (x - c)) = f(x), but the right answer for me is not, "Well, it's trivially equal, can't you see that?" I think so, but I can also make the most trivial of mistakes.