
http://dx.doi.org/10.4236/am.2015.612181

Existence and Stability Analysis of Fractional Order BAM Neural Networks with a Time Delay

Yuping Cao^1, Chuanzhi Bai^2

^1 Department of Basic Courses, Lianyungang Technical College, Lianyungang, China
^2 Department of Mathematics, Huaiyin Normal University, Huaian, China

Received 25 September 2015; accepted 22 November 2015; published 25 November 2015

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Abstract

Based on the theory of fractional calculus, the contraction mapping principle, the Krasnoselskii fixed point theorem and the inequality technique, a class of Caputo fractional-order BAM neural networks with delays in the leakage terms is investigated in this paper. Some new sufficient conditions are established to guarantee the existence and uniqueness of the nontrivial solution. Moreover, uniform stability of such networks on a fixed time interval is established. Finally, an illustrative example is given to demonstrate the effectiveness of the obtained results.

Keywords

BAM Neural Networks, Caputo Fractional-Order, Existence, Fixed Point Theorems, Uniform Stability

1. Introduction

Fractional-order calculus was first introduced 300 years ago, but for a long time it attracted little attention because of its lack of application background and its complexity. In recent decades, the study of fractional-order calculus has re-attracted tremendous attention from many researchers because it can be applied to physics, applied mathematics and engineering [1]-[6]. We know that the fractional-order derivative is nonlocal and has weakly singular kernels. It provides an excellent instrument for the description of memory and hereditary properties of dynamical processes, where such effects are neglected or difficult to describe by integer-order models.

states of neurons. Moreover, the superiority of the Caputo fractional derivative is that the initial conditions for fractional differential equations with Caputo derivatives take on the same form as those for integer-order differentiation. Therefore, it is necessary and interesting to study fractional-order neural networks both in theory and in applications.

Recently, some important and interesting results on fractional-order neural networks have been obtained and various issues have been investigated [7]-[14] by many authors. In [11], the authors proposed a fractional-order Hopfield neural network and investigated its stability by using an energy function. In [12], the authors investigated stability, multistability, bifurcations, and chaos for fractional-order Hopfield neural networks. In [13], Chen et al. obtained a sufficient condition for uniform stability of a class of fractional-order delayed neural networks. In [14], we investigated the finite-time stability of Caputo fractional-order BAM neural networks with distributed delay and established a delay-dependent stability criterion by using the theory of fractional calculus and a generalized Gronwall-Bellman inequality approach. In [15], Song and Cao considered the existence and uniqueness of the nontrivial solution, as well as the uniform stability, of a class of neural networks with a fractional-order derivative, by using the contraction mapping principle, the Krasnoselskii fixed point theorem and the inequality technique.

The integer-order bidirectional associative memory (BAM) neural network model was first proposed and studied by Kosko [16]. This neural network has been widely studied due to its promising potential for applications in pattern recognition and automatic control. In recent years, integer-order BAM neural networks have been extensively studied [17]-[21]. Recently, some authors have considered the uniform stability of delayed neural networks; see, for example, [22]-[24] and the references therein. However, to the best of our knowledge, there are few results on the uniform stability analysis of fractional-order BAM neural networks.

Motivated by the above-mentioned works, this paper considers the uniform stability of a class of fractional-order BAM neural networks with delays in the leakage terms described by

$$
\begin{cases}
{}^{C}D^{\alpha} x_i(t) = -c_i x_i(t-\sigma) + \displaystyle\sum_{j=1}^{m} a_{ij} f_j(y_j(t)) + I_i, & t \ge 0,\ i = 1,\dots,n,\\[2mm]
{}^{C}D^{\beta} y_j(t) = -d_j y_j(t-\sigma) + \displaystyle\sum_{i=1}^{n} b_{ji} g_i(x_i(t)) + J_j, & t \ge 0,\ j = 1,\dots,m,
\end{cases} \tag{1}
$$

where $0 < \alpha, \beta < 1$; ${}^{C}D^{\alpha}$ and ${}^{C}D^{\beta}$ denote the Caputo fractional derivatives of order $\alpha$ and $\beta$, respectively; $x_i(t)$ ($i = 1,\dots,n$) and $y_j(t)$ ($j = 1,\dots,m$) are the activations of the $i$th neuron in the neural field $F_x$ and the $j$th neuron in the neural field $F_y$ at time $t$, respectively; $f_j(y_j(t))$ denotes the activation function of the $j$th neuron from the neural field $F_y$ at time $t$, and $g_i(x_i(t))$ denotes the activation function of the $i$th neuron from the neural field $F_x$ at time $t$; $I_i$ and $J_j$ are constants denoting the external inputs on the $i$th neuron from $F_x$ and the $j$th neuron from $F_y$, respectively; the positive constants $c_i$ and $d_j$ denote the rates with which the $i$th neuron from $F_x$ and the $j$th neuron from $F_y$ reset their potential to the resting state in isolation when disconnected from the network and external inputs; the constants $a_{ij}$ and $b_{ji}$ represent the connection strengths; and the nonnegative constant $\sigma$ denotes the leakage delay.

This paper is organized as follows. In Section 2, some definitions of fractional-order calculus and some necessary lemmas are given. In Section 3, some new sufficient conditions ensuring the existence and uniqueness of the nontrivial solution, as well as the uniform stability of the fractional-order BAM neural networks (1), are obtained. Finally, an example is presented in Section 4 to demonstrate the effectiveness of our theoretical results.

2. Preliminaries

For the convenience of the reader, we first briefly recall some definitions of fractional calculus; for more details, see, for example, [1] [2] [5].

Definition 1. The Riemann-Liouville fractional integral of order $\alpha > 0$ of a function $u : (0,\infty) \to \mathbb{R}$ is given by

$$
I^{\alpha} u(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} u(s)\,\mathrm{d}s.
$$
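The Riemann-Liouville integral can be evaluated numerically and checked against the closed form $I^{\alpha} t = t^{\alpha+1}/\Gamma(\alpha+2)$. The sketch below is illustrative only; the substitution $v = (t-s)^{\alpha}$ is just a device (not from the paper) that removes the endpoint singularity of the kernel so a plain midpoint rule works:

```python
import math

def rl_integral(u, t, alpha, n=20000):
    # I^alpha u(t) = (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) u(s) ds.
    # With v = (t-s)^alpha this becomes
    # (1/Gamma(alpha+1)) * int_0^{t^alpha} u(t - v^(1/alpha)) dv, a smooth integrand.
    upper = t ** alpha
    h = upper / n
    total = sum(u(t - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / math.gamma(alpha + 1)

# Check against the closed form I^alpha t = t^(alpha+1) / Gamma(alpha+2):
alpha, t = 0.8, 1.0
exact = t ** (alpha + 1) / math.gamma(alpha + 2)
approx = rl_integral(lambda s: s, t, alpha)
assert abs(approx - exact) < 1e-4
```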

Definition 2. The Caputo fractional derivative of order $\gamma > 0$ of a function $u : (0,\infty) \to \mathbb{R}$ can be written as

$$
{}^{C}D^{\gamma} u(t) = \frac{1}{\Gamma(n-\gamma)} \int_0^t \frac{u^{(n)}(s)}{(t-s)^{\gamma+1-n}}\,\mathrm{d}s, \qquad n-1 < \gamma < n.
$$
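As a sanity check on Definition 2 for $0 < \gamma < 1$ (so $n = 1$), the Caputo derivative of $u(t) = t$ is $t^{1-\gamma}/\Gamma(2-\gamma)$. The quadrature substitution below is an illustrative device, not part of the paper:

```python
import math

def caputo_deriv(du, t, gamma, n=20000):
    # Caputo derivative for 0 < gamma < 1:
    # D^gamma u(t) = (1/Gamma(1-gamma)) * int_0^t u'(s) / (t-s)^gamma ds.
    # With v = (t-s)^(1-gamma) this becomes
    # (1/Gamma(2-gamma)) * int_0^{t^(1-gamma)} u'(t - v^(1/(1-gamma))) dv.
    upper = t ** (1.0 - gamma)
    h = upper / n
    total = sum(du(t - ((k + 0.5) * h) ** (1.0 / (1.0 - gamma))) for k in range(n))
    return total * h / math.gamma(2.0 - gamma)

# For u(t) = t (so u'(s) = 1) the Caputo derivative is t^(1-gamma)/Gamma(2-gamma):
gamma, t = 0.8, 1.0
exact = t ** (1.0 - gamma) / math.gamma(2.0 - gamma)
approx = caputo_deriv(lambda s: 1.0, t, gamma)
assert abs(approx - exact) < 1e-4
```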

Let $X = \big\{ x = (x_1, x_2, \dots, x_n)^{\mathrm T} : x_i \in C([0,T], \mathbb{R}) \big\}$ and $Y = \big\{ y = (y_1, y_2, \dots, y_m)^{\mathrm T} : y_j \in C([0,T], \mathbb{R}) \big\}$. For $p, q > 1$, we know that $X$ is a Banach space with the norm

$$
\|x\|_p = \sup_{0 \le t \le T} \Big( \sum_{i=1}^{n} |x_i(t)|^{p} \Big)^{1/p},
$$

and $Y$ is a Banach space with the norm

$$
\|y\|_q = \sup_{0 \le t \le T} \Big( \sum_{j=1}^{m} |y_j(t)|^{q} \Big)^{1/q}.
$$

It is easy to see that $X \times Y$ is a Banach space with the norm $\|(x,y)\| = \|x\|_p + \|y\|_q$.
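The norm on $X$ can be evaluated on a sampled trajectory. The trajectory $x(t) = (\sin t, \cos t)$ on $[0,1]$, the grid, and $p = 2$ below are illustrative assumptions; for this choice $\|x\|_2 = 1$ since $\sin^2 t + \cos^2 t = 1$ at every $t$:

```python
import math

# ||x||_p = sup over [0, T] of the l^p norm of x(t), approximated on a uniform grid.
T, N, p = 1.0, 1001, 2
grid = [k * T / (N - 1) for k in range(N)]
norm = max(sum(abs(v) ** p for v in (math.sin(t), math.cos(t))) ** (1 / p) for t in grid)
assert abs(norm - 1.0) < 1e-6  # sin^2 t + cos^2 t = 1 for every t
```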

The initial conditions associated with system (1) are of the form

$$
x_i(\theta) = \phi_i(\theta), \quad y_j(\theta) = \psi_j(\theta), \quad \theta \in [-\sigma, 0], \quad i = 1,\dots,n,\; j = 1,\dots,m, \tag{2}
$$

where $\phi_i, \psi_j \in C([-\sigma, 0], \mathbb{R})$.

To prove our results, the following lemmas are needed.

Lemma 1 ([25]). Let $\alpha > 0$. Then the fractional differential equation

$$
{}^{C}D^{\alpha} x(t) = h(t)
$$

has solutions

$$
x(t) = I^{\alpha} h(t) + c_0 + c_1 t + c_2 t^2 + \cdots + c_{n-1} t^{n-1},
$$

where $c_i \in \mathbb{R}$, $n = [\alpha] + 1$.

Lemma 2 ([26]). Let $D$ be a closed, convex and nonempty subset of a Banach space $X$, and let $\varphi_1, \varphi_2$ be operators such that

1) $\varphi_1 x + \varphi_2 y \in D$ whenever $x, y \in D$;
2) $\varphi_1$ is compact and continuous;
3) $\varphi_2$ is a contraction mapping.

Then there exists $x \in D$ such that $\varphi_1 x + \varphi_2 x = x$.

In order to obtain our main results, we make the following assumptions.

(H1) The neuron activation functions $f_j$ and $g_i$ are Lipschitz continuous; that is, there exist positive constants $L_j$, $l_i$ ($j = 1,\dots,m$, $i = 1,\dots,n$) such that

$$
|f_j(u) - f_j(v)| \le L_j |u - v|, \qquad |g_i(u) - g_i(v)| \le l_i |u - v|, \qquad u, v \in \mathbb{R}.
$$

(H2) For $j = 1,\dots,m$ and $i = 1,\dots,n$, there exist $M_1, M_2 > 0$ such that $|f_j(u)| \le M_1$ and $|g_i(v)| \le M_2$ for all $u, v \in \mathbb{R}$.

3. Main Results

For convenience, let

$$
c_0 = \max_{1 \le i \le n} c_i, \quad d_0 = \max_{1 \le j \le m} d_j, \quad a_i = \sum_{j=1}^{m} |a_{ij}|, \quad b_j = \sum_{i=1}^{n} |b_{ji}|, \tag{3}
$$

$$
I_0 = \max_{1 \le i \le n} |I_i|, \quad J_0 = \max_{1 \le j \le m} |J_j|, \quad L_0 = \max_{1 \le j \le m} L_j, \quad l_0 = \max_{1 \le i \le n} l_i, \tag{4}
$$

$$
\eta_i = \Big( \sum_{j=1}^{m} |a_{ij}|^{q/(q-1)} \Big)^{(q-1)/q}, \qquad \xi_j = \Big( \sum_{i=1}^{n} |b_{ji}|^{p/(p-1)} \Big)^{(p-1)/p}. \tag{5}
$$

Theorem 3. Assume that (H1) holds. If there exist real numbers $p, q > 1$ such that

$$
B = \max\bigg\{ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)} + \frac{l_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q},\ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} + \frac{L_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \bigg\} < 1, \tag{6}
$$

then system (1) has a unique solution on $[0,T]$.

Proof. Define $(F,G) : X \times Y \to X \times Y$ as

$$
(F,G)(x,y)(t) = \big( F_1(x,y)(t), \dots, F_n(x,y)(t), G_1(x,y)(t), \dots, G_m(x,y)(t) \big)^{\mathrm T},
$$

where

$$
F_i(x,y)(t) = \phi_i(0) + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \Big[ -c_i x_i(s-\sigma) + \sum_{j=1}^{m} a_{ij} f_j(y_j(s)) + I_i \Big]\,\mathrm{d}s,
$$

$$
G_j(x,y)(t) = \psi_j(0) + \frac{1}{\Gamma(\beta)} \int_0^t (t-s)^{\beta-1} \Big[ -d_j y_j(s-\sigma) + \sum_{i=1}^{n} b_{ji} g_i(x_i(s)) + J_j \Big]\,\mathrm{d}s.
$$

By Lemma 1, we know that a fixed point of $(F,G)$ is a solution of system (1) with initial conditions (2). In the following, we use the contraction mapping principle to prove that the operator $(F,G)$ has a unique fixed point.

Firstly, we prove that $(F,G)B_\delta \subset B_\delta$, where $B_\delta = \{ (x,y) \in X \times Y : \|(x,y)\| \le \delta \}$. Set

$$
\delta \ge \frac{(1+A)\|(\phi,\psi)\| + C}{1 - B}, \tag{7}
$$

where

$$
A = \max\bigg\{ \frac{c_0 n^{1/p} \sigma^{\alpha}}{\Gamma(\alpha+1)},\ \frac{d_0 m^{1/q} \sigma^{\beta}}{\Gamma(\beta+1)} \bigg\} \tag{8}
$$

and

$$
C = \frac{I_0 n^{1/p} T^{\alpha}}{\Gamma(\alpha+1)} + \frac{J_0 m^{1/q} T^{\beta}}{\Gamma(\beta+1)} + \frac{f_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} a_i^{p} \Big)^{1/p} + \frac{g_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} b_j^{q} \Big)^{1/q}, \tag{9}
$$

with $f_0 = \max_{1 \le j \le m} |f_j(0)|$ and $g_0 = \max_{1 \le i \le n} |g_i(0)|$.

By the Minkowski inequality, we have

$$
\begin{aligned}
\|F(x,y)\|_p \le{}& \Big( \sum_{i=1}^{n} |\phi_i(0)|^{p} \Big)^{1/p}
+ \bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{c_i}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} |x_i(s-\sigma)|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p} \\
& + \bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sum_{j=1}^{m} a_{ij} \big| f_j(y_j(s)) - f_j(0) \big|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p} \\
& + \bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sum_{j=1}^{m} a_{ij} |f_j(0)|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p}
+ \bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} |I_i|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p}.
\end{aligned} \tag{10}
$$

For the leakage term, splitting the integral at $s = \sigma$ and using $x_i(s-\sigma) = \phi_i(s-\sigma)$ for $s \in [0,\sigma]$ gives

$$
\bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{c_i}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} |x_i(s-\sigma)|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p}
\le \frac{c_0 n^{1/p}}{\Gamma(\alpha+1)} \Big( \sigma^{\alpha} \|\phi\| + (T-\sigma)^{\alpha} \|x\|_p \Big), \tag{11}
$$

where $\|\phi\| = \sup_{-\sigma \le \theta \le 0} \big( \sum_{i=1}^{n} |\phi_i(\theta)|^{p} \big)^{1/p}$.

Similar to (11) and the proof of Theorem 1 in [15], we have by (H1), (4) and (5) that

$$
\bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} |I_i|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p} \le \frac{I_0 n^{1/p} T^{\alpha}}{\Gamma(\alpha+1)}, \tag{12}
$$

$$
\bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sum_{j=1}^{m} a_{ij} |f_j(0)|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p} \le \frac{f_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} a_i^{p} \Big)^{1/p}, \tag{13}
$$

and, by the Hölder inequality,

$$
\bigg( \sum_{i=1}^{n} \Big( \sup_{0 \le t \le T} \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sum_{j=1}^{m} a_{ij} \big| f_j(y_j(s)) - f_j(0) \big|\,\mathrm{d}s \Big)^{p} \bigg)^{1/p}
\le \frac{L_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \|y\|_q. \tag{14}
$$

Substituting (11)-(14) into (10), we get

$$
\|F(x,y)\|_p \le \frac{I_0 n^{1/p} T^{\alpha}}{\Gamma(\alpha+1)} + \frac{f_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} a_i^{p} \Big)^{1/p} + \Big( 1 + \frac{c_0 n^{1/p} \sigma^{\alpha}}{\Gamma(\alpha+1)} \Big) \|\phi\|
+ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)} \|x\|_p + \frac{L_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \|y\|_q. \tag{15}
$$

Similarly, we obtain

$$
\|G(x,y)\|_q \le \frac{J_0 m^{1/q} T^{\beta}}{\Gamma(\beta+1)} + \frac{g_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} b_j^{q} \Big)^{1/q} + \Big( 1 + \frac{d_0 m^{1/q} \sigma^{\beta}}{\Gamma(\beta+1)} \Big) \|\psi\|
+ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} \|y\|_q + \frac{l_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q} \|x\|_p. \tag{16}
$$

Adding (15) and (16) and using (7)-(9) then yields

$$
\|(F,G)(x,y)\| = \|F(x,y)\|_p + \|G(x,y)\|_q
\le (1+A)\|(\phi,\psi)\| + C + B\|(x,y)\|
\le (1+A)\|(\phi,\psi)\| + C + B\delta \le \delta.
$$

Hence $(F,G)B_\delta \subset B_\delta$.

Secondly, we prove that $(F,G) : X \times Y \to X \times Y$ is a contraction mapping. Let $(x,y), (u,v) \in X \times Y$. Similar to the above process, we get

$$
\begin{aligned}
\|(F,G)(x,y) - (F,G)(u,v)\| &= \|F(x,y) - F(u,v)\|_p + \|G(x,y) - G(u,v)\|_q \\
&\le \bigg[ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)} + \frac{l_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q} \bigg] \|x-u\|_p
+ \bigg[ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} + \frac{L_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \bigg] \|y-v\|_q \\
&\le B \|(x,y) - (u,v)\|.
\end{aligned}
$$

By (6), we conclude that $(F,G)$ is a contraction mapping. It follows from the contraction mapping principle that system (1) has a unique solution. The proof is completed.

Theorem 4. Assume that (H2) holds. If there exist real numbers $p, q > 1$ such that

$$
\max\bigg\{ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)},\ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} \bigg\} < 1, \tag{17}
$$

then system (1) has at least one solution on $[0,T]$.

Proof. Let

$$
\delta \ge \frac{(1+A)\|(\phi,\psi)\| + \dfrac{I_0 n^{1/p} T^{\alpha}}{\Gamma(\alpha+1)} + \dfrac{J_0 m^{1/q} T^{\beta}}{\Gamma(\beta+1)} + \dfrac{M_1 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} + \dfrac{M_2 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q}}
{1 - \max\bigg\{ \dfrac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)},\ \dfrac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} \bigg\}}. \tag{18}
$$

Define two operators $(L, \bar L)$ and $(N, \bar N)$ on $B_\delta$ as follows:

$$
(L, \bar L)(x,y)(t) = \big( L_1(x,y)(t), \dots, L_n(x,y)(t), \bar L_1(x,y)(t), \dots, \bar L_m(x,y)(t) \big)^{\mathrm T},
$$

$$
(N, \bar N)(x,y)(t) = \big( N_1(x,y)(t), \dots, N_n(x,y)(t), \bar N_1(x,y)(t), \dots, \bar N_m(x,y)(t) \big)^{\mathrm T},
$$

where

$$
L_i(x,y)(t) = \phi_i(0) + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \big[ -c_i x_i(s-\sigma) + I_i \big]\,\mathrm{d}s, \quad i = 1,\dots,n,
$$

$$
N_i(x,y)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sum_{j=1}^{m} a_{ij} f_j(y_j(s))\,\mathrm{d}s, \quad i = 1,\dots,n,
$$

$$
\bar L_j(x,y)(t) = \psi_j(0) + \frac{1}{\Gamma(\beta)} \int_0^t (t-s)^{\beta-1} \big[ -d_j y_j(s-\sigma) + J_j \big]\,\mathrm{d}s, \quad j = 1,\dots,m,
$$

$$
\bar N_j(x,y)(t) = \frac{1}{\Gamma(\beta)} \int_0^t (t-s)^{\beta-1} \sum_{i=1}^{n} b_{ji} g_i(x_i(s))\,\mathrm{d}s, \quad j = 1,\dots,m.
$$

Note that $(F,G) = (L,\bar L) + (N,\bar N)$.

Firstly, we prove that $(L,\bar L)(x,y) + (N,\bar N)(x,y) \in B_\delta$ for $(x,y) \in B_\delta$. In fact, using the Minkowski inequality, the estimates of the type (11)-(12), (H2) and (18) gives

$$
\begin{aligned}
\|(L,\bar L)(x,y) + (N,\bar N)(x,y)\|
\le{}& (1+A)\|(\phi,\psi)\| + \frac{I_0 n^{1/p} T^{\alpha}}{\Gamma(\alpha+1)} + \frac{J_0 m^{1/q} T^{\beta}}{\Gamma(\beta+1)}
+ \max\bigg\{ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)},\ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} \bigg\} \delta \\
& + \frac{M_1 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} + \frac{M_2 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q} \le \delta.
\end{aligned}
$$

Thus, we conclude that $(L,\bar L)(x,y) + (N,\bar N)(x,y) \in B_\delta$. Secondly, for any $(x,y), (u,v) \in B_\delta$, we have

$$
\|(L,\bar L)(x,y) - (L,\bar L)(u,v)\|
\le \max\bigg\{ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)},\ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} \bigg\} \|(x,y) - (u,v)\|,
$$

which implies that $(L,\bar L)$ is a contraction mapping by (17).

Thirdly, we prove that $(N,\bar N)$ is continuous and compact. Since $f_j$ and $g_i$ ($j = 1,\dots,m$, $i = 1,\dots,n$) are continuous, it is obvious that $(N,\bar N)$ is also continuous. Let $(x,y) \in B_\delta$. We get by (H2) that

$$
\|(N,\bar N)(x,y)\| \le \frac{M_1 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} + \frac{M_2 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q},
$$

which implies that $(N,\bar N)$ is uniformly bounded on $B_\delta$. Moreover, we can show that $(N,\bar N)(x,y)(t)$ is equicontinuous. In fact, for $(x,y) \in B_\delta$ and $0 < t_2 < t_1 \le T$, we obtain

$$
\|(N,\bar N)(x,y)(t_1) - (N,\bar N)(x,y)(t_2)\|
\le \frac{M_1}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \big( t_1^{\alpha} - t_2^{\alpha} + 2(t_1-t_2)^{\alpha} \big)
+ \frac{M_2}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q} \big( t_1^{\beta} - t_2^{\beta} + 2(t_1-t_2)^{\beta} \big) \to 0
$$

as $t_2 \to t_1$, uniformly on $B_\delta$. Hence $(N,\bar N)B_\delta$ is relatively compact by the Arzelà-Ascoli theorem, so $(N,\bar N)$ is compact. By Lemma 2, we have that system (1) has at least one solution. The proof is completed.

Theorem 5. Assume that (H1) and condition (6) hold. Then the solution of system (1) is uniformly stable on $[0,T]$.

Proof. Assume that $(x(t), y(t))^{\mathrm T}$ and $(u(t), v(t))^{\mathrm T}$ are any two solutions of system (1) with the initial conditions $(\phi, \psi)$ and $(\bar\phi, \bar\psi)$, respectively. Then

$$
(x(t), y(t))^{\mathrm T} = (F,G)(x,y)(t), \qquad (u(t), v(t))^{\mathrm T} = (\bar F, \bar G)(u,v)(t),
$$

that is,

$$
x_i(t) = \phi_i(0) + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \Big[ -c_i x_i(s-\sigma) + \sum_{j=1}^{m} a_{ij} f_j(y_j(s)) + I_i \Big]\,\mathrm{d}s,
$$

$$
y_j(t) = \psi_j(0) + \frac{1}{\Gamma(\beta)} \int_0^t (t-s)^{\beta-1} \Big[ -d_j y_j(s-\sigma) + \sum_{i=1}^{n} b_{ji} g_i(x_i(s)) + J_j \Big]\,\mathrm{d}s,
$$

$$
u_i(t) = \bar\phi_i(0) + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \Big[ -c_i u_i(s-\sigma) + \sum_{j=1}^{m} a_{ij} f_j(v_j(s)) + I_i \Big]\,\mathrm{d}s,
$$

$$
v_j(t) = \bar\psi_j(0) + \frac{1}{\Gamma(\beta)} \int_0^t (t-s)^{\beta-1} \Big[ -d_j v_j(s-\sigma) + \sum_{i=1}^{n} b_{ji} g_i(u_i(s)) + J_j \Big]\,\mathrm{d}s,
$$

where $(\bar F, \bar G)$ denotes the operator $(F,G)$ with $(\phi,\psi)$ replaced by $(\bar\phi,\bar\psi)$,

which implies, by the same estimates as in the proof of Theorem 3,

$$
\begin{aligned}
\|(x,y) - (u,v)\| &= \|x-u\|_p + \|y-v\|_q \\
&\le (1+A)\|(\phi,\psi) - (\bar\phi,\bar\psi)\|
+ \bigg[ \frac{c_0 n^{1/p} (T-\sigma)^{\alpha}}{\Gamma(\alpha+1)} + \frac{l_0 T^{\beta}}{\Gamma(\beta+1)} \Big( \sum_{j=1}^{m} \xi_j^{q} \Big)^{1/q} \bigg] \|x-u\|_p \\
&\quad + \bigg[ \frac{d_0 m^{1/q} (T-\sigma)^{\beta}}{\Gamma(\beta+1)} + \frac{L_0 T^{\alpha}}{\Gamma(\alpha+1)} \Big( \sum_{i=1}^{n} \eta_i^{p} \Big)^{1/p} \bigg] \|y-v\|_q \\
&\le (1+A)\|(\phi,\psi) - (\bar\phi,\bar\psi)\| + B\|(x,y) - (u,v)\|.
\end{aligned}
$$

Hence, we have

$$
\|(x,y) - (u,v)\| \le \frac{1+A}{1-B} \|(\phi,\psi) - (\bar\phi,\bar\psi)\|. \tag{19}
$$

Thus, for any $\varepsilon > 0$ there exists $\delta_0 = (1-B)\varepsilon/(1+A) > 0$ such that $\|(\phi,\psi) - (\bar\phi,\bar\psi)\| < \delta_0$ implies

$$
\|(x,y) - (u,v)\| < \varepsilon,
$$

which implies that the solution of system (1) is uniformly stable on $[0,T]$. The proof is completed.

4. An Illustrative Example

In this section, we give an example to illustrate the effectiveness of our main results.

Consider the following two-state Caputo fractional-order BAM neural network model with leakage delay

$$
\begin{cases}
{}^{C}D^{\alpha} x_1(t) = -0.3\, x_1(t-\sigma) - 0.2 f_1(y_1(t)) + 0.1 f_2(y_2(t)) + 0.4,\\
{}^{C}D^{\alpha} x_2(t) = -0.2\, x_2(t-\sigma) - 0.3 f_1(y_1(t)) + 0.2 f_2(y_2(t)) - 0.3,\\
{}^{C}D^{\beta} y_1(t) = -0.4\, y_1(t-\sigma) + 0.4 g_1(x_1(t)) + 0.2 g_2(x_2(t)) + 0.5,\\
{}^{C}D^{\beta} y_2(t) = -0.5\, y_2(t-\sigma) + 0.1 g_1(x_1(t)) + 0.3 g_2(x_2(t)) + 0.2,
\end{cases} \tag{20}
$$

with the initial conditions

$$
x_1(t) = \phi_1(t), \quad x_2(t) = \phi_2(t), \quad y_1(t) = \psi_1(t), \quad y_2(t) = \psi_2(t), \quad t \in [-\sigma, 0],
$$

where $\phi_i, \psi_i \in C([-\sigma, 0], \mathbb{R})$, $i = 1, 2$; $\alpha = 0.8$, $\beta = 0.9$, $\sigma = 0.3$, $n = m = 2$, $f_j(x_j) = \frac{1}{2}\big( |x_j + 1| - |x_j - 1| \big)$ and $g_i(x_i) = \tanh(x_i)$ ($i, j = 1, 2$).

Let $T = 1$, $p = q = 2$. From (3)-(5), it is easy to check that

$$
c_0 = 0.3, \quad d_0 = 0.5, \quad a_1 = 0.3, \quad a_2 = 0.5, \quad b_1 = 0.6, \quad b_2 = 0.4, \quad L_0 = l_0 = 1,
$$

$$
\eta_1 = \sqrt{a_{11}^2 + a_{12}^2} = 0.2236, \quad \eta_2 = \sqrt{a_{21}^2 + a_{22}^2} = 0.3606, \quad
\xi_1 = \sqrt{b_{11}^2 + b_{12}^2} = 0.4472, \quad \xi_2 = \sqrt{b_{21}^2 + b_{22}^2} = 0.3162.
$$

Thus,

$$
B = \max\bigg\{ \frac{0.3 \times \sqrt{2} \times 0.7^{0.8}}{\Gamma(1.8)} + \frac{\sqrt{0.4472^2 + 0.3162^2}}{\Gamma(1.9)},\
\frac{0.5 \times \sqrt{2} \times 0.7^{0.9}}{\Gamma(1.9)} + \frac{\sqrt{0.2236^2 + 0.3606^2}}{\Gamma(1.8)} \bigg\} = 0.9889 < 1,
$$

that is, condition (6) holds. By Theorems 3 and 5, system (20) has a unique solution, which is uniformly stable on $[0, 1]$.
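The computation of $B$ can be cross-checked numerically. The sketch below assumes the form of condition (6) as stated in Theorem 3 with the coefficients of (20); `math.gamma` supplies the Gamma function values:

```python
import math

alpha, beta, sigma, T = 0.8, 0.9, 0.3, 1.0
p = q = 2
n = m = 2
c0, d0, L0, l0 = 0.3, 0.5, 1.0, 1.0
a = [[0.2, 0.1], [0.3, 0.2]]   # |a_ij| from system (20)
b = [[0.4, 0.2], [0.1, 0.3]]   # |b_ji| from system (20)

# eta_i and xi_j from (5); with p = q = 2 both exponents reduce to 2 and 1/2
eta = [sum(aij ** 2 for aij in row) ** 0.5 for row in a]
xi = [sum(bji ** 2 for bji in row) ** 0.5 for row in b]

elem1 = (c0 * n ** (1 / p) * (T - sigma) ** alpha / math.gamma(alpha + 1)
         + l0 * T ** beta / math.gamma(beta + 1) * sum(v ** q for v in xi) ** (1 / q))
elem2 = (d0 * m ** (1 / q) * (T - sigma) ** beta / math.gamma(beta + 1)
         + L0 * T ** alpha / math.gamma(alpha + 1) * sum(v ** p for v in eta) ** (1 / p))
B = max(elem1, elem2)

assert abs(eta[0] - 0.2236) < 1e-3 and abs(xi[0] - 0.4472) < 1e-3
assert B < 1   # condition (6) holds
```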

In the following, we show the simulation results for model (20). We consider four cases:

Case 1 with the initial values $(\phi_1(t), \phi_2(t), \psi_1(t), \psi_2(t))^{\mathrm T} \equiv (0.3, 1.5, -0.9, -0.8)^{\mathrm T}$ for $t \in [-0.3, 0]$;

Case 2 with the initial values $(\phi_1(t), \phi_2(t), \psi_1(t), \psi_2(t))^{\mathrm T} \equiv (1.2, -0.9, 1.4, -1.3)^{\mathrm T}$ for $t \in [-0.3, 0]$;

Case 3 with the initial values $(\phi_1(t), \phi_2(t), \psi_1(t), \psi_2(t))^{\mathrm T} \equiv (0.9, 1.6, 0.7, 1.5)^{\mathrm T}$ for $t \in [-0.3, 0]$;

Case 4 with the initial values $(\phi_1(t), \phi_2(t), \psi_1(t), \psi_2(t))^{\mathrm T} \equiv (0.5, 1.3, 1.9, 2.2)^{\mathrm T}$ for $t \in [-0.3, 0]$.
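The paper does not state which numerical scheme produced Figure 1. The following minimal sketch integrates (20) for Case 1 with an explicit product-rectangle (fractional Euler) discretization of the equivalent Volterra integral form; the step size h = 0.01 and the boundedness threshold are illustrative assumptions:

```python
import math

alpha, beta, sigma, T = 0.8, 0.9, 0.3, 1.0
h = 0.01
steps = round(T / h)        # grid steps on [0, T]
lag = round(sigma / h)      # sigma is assumed an integer multiple of h

f = lambda s: 0.5 * (abs(s + 1) - abs(s - 1))
g = math.tanh

# Case 1 history on [-sigma, 0] (constant initial functions)
x = [[0.3, 1.5] for _ in range(lag + 1)]
y = [[-0.9, -0.8] for _ in range(lag + 1)]

def rhs_x(xd, yc):  # xd = x(t - sigma), yc = y(t)
    return [-0.3 * xd[0] - 0.2 * f(yc[0]) + 0.1 * f(yc[1]) + 0.4,
            -0.2 * xd[1] - 0.3 * f(yc[0]) + 0.2 * f(yc[1]) - 0.3]

def rhs_y(yd, xc):  # yd = y(t - sigma), xc = x(t)
    return [-0.4 * yd[0] + 0.4 * g(xc[0]) + 0.2 * g(xc[1]) + 0.5,
            -0.5 * yd[1] + 0.1 * g(xc[0]) + 0.3 * g(xc[1]) + 0.2]

# x(t_{k+1}) = x(0) + h^a/Gamma(a+1) * sum_j ((k+1-j)^a - (k-j)^a) * F(t_j)
ca = h ** alpha / math.gamma(alpha + 1)
cb = h ** beta / math.gamma(beta + 1)
Fx, Fy = [], []
for k in range(steps):
    # list index lag + k is time t_k; index k is the delayed time t_k - sigma
    Fx.append(rhs_x(x[k], y[lag + k]))
    Fy.append(rhs_y(y[k], x[lag + k]))
    x.append([x[lag][i] + ca * sum(((k + 1 - j) ** alpha - (k - j) ** alpha) * Fx[j][i]
                                   for j in range(k + 1)) for i in range(2)])
    y.append([y[lag][i] + cb * sum(((k + 1 - j) ** beta - (k - j) ** beta) * Fy[j][i]
                                   for j in range(k + 1)) for i in range(2)])

# The computed states stay bounded on [0, 1], consistent with Theorem 5
assert all(abs(v) < 10 for state in x + y for v in state)
```

This scheme is only first-order accurate; a predictor-corrector method or a smaller step size would be needed for publication-quality trajectories.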

Figure 1. Transient states of the fractional-order BAM neural networks (20) with $\alpha = 0.8$, $\beta = 0.9$, and $\sigma = 0.3$.

Acknowledgements

This work is supported by Natural Science Foundation of Jiangsu Province (BK2011407) and Natural Science Foundation of China (11571136 and 11271364).

References

[1] Miller, K.S. and Ross, B. (1993) An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley & Sons, New York.

[2] Podlubny, I. (1999) Fractional Differential Equations. Academic Press, San Diego.

[3] Soczkiewicz, E. (2002) Application of Fractional Calculus in the Theory of Viscoelasticity. Molecular and Quantum Acoustics, 23, 397-404.

[4] Kulish, V. and Lage, J. (2002) Application of Fractional Calculus to Fluid Mechanics. Journal of Fluids Engineering, 124, 803-806. http://dx.doi.org/10.1115/1.1478062

[5] Kilbas, A.A., Srivastava, H.M. and Trujillo, J.J. (2006) Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies, Vol. 204, Elsevier Science B.V., Amsterdam.

[6] Sabatier, J., Agrawal, O.P. and Machado, J.A.T., Eds. (2007) Advances in Fractional Calculus: Theoretical Developments and Applications. Springer, Berlin.

[7] Arena, P., Caponetto, R., Fortuna, L. and Porto, D. (1998) Bifurcation and Chaos in Noninteger Order Cellular Neural Networks. International Journal of Bifurcation and Chaos, 8, 1527-1539. http://dx.doi.org/10.1142/S0218127498001170

[8] Arena, P., Fortuna, L. and Porto, D. (2000) Chaotic Behavior in Noninteger Order Cellular Neural Networks. Physical Review E, 61, 776-781. http://dx.doi.org/10.1103/PhysRevE.61.776

[9] Yu, J., Hu, C. and Jiang, H. (2012) α-Stability and α-Synchronization for Fractional-Order Neural Networks. Neural Networks, 35, 82-87. http://dx.doi.org/10.1016/j.neunet.2012.07.009

[10] Wu, R., Hei, X. and Chen, L. (2013) Finite-Time Stability of Fractional-Order Neural Networks with Delay. Communications in Theoretical Physics, 60, 189-193. http://dx.doi.org/10.1088/0253-6102/60/2/08

[11] Boroomand, A. and Menhaj, M.B. (2009) Fractional-Order Hopfield Neural Networks. In: Köppen, M., Kasabov, N. and Coghill, G., Eds., Advances in Neuro-Information Processing, Springer, Berlin, 883-890.

[12] Kaslik, E. and Sivasundaram, S. (2012) Nonlinear Dynamics and Chaos in Fractional-Order Neural Networks. Neural Networks, 32, 245-256. http://dx.doi.org/10.1016/j.neunet.2012.02.030

[13] Chen, L., Chai, Y., Wu, R., Ma, T. and Zhai, H. (2013) Dynamic Analysis of a Class of Fractional-Order Neural Networks with Delay. Neurocomputing, 111, 190-194. http://dx.doi.org/10.1016/j.neucom.2012.11.034

[14] Cao, Y. and Bai, C. (2014) Finite-Time Stability of Fractional-Order BAM Neural Networks with Distributed Delay. Abstract and Applied Analysis, 2014, Article ID: 634803.

[15] Song, C. and Cao, J. (2014) Dynamics in Fractional-Order Neural Networks. Neurocomputing, 142, 494-498. http://dx.doi.org/10.1016/j.neucom.2014.03.047

[16] Kosko, B. (1988) Bidirectional Associative Memories. IEEE Transactions on Systems, Man, and Cybernetics, 18, 49-60. http://dx.doi.org/10.1109/21.87054

[17] Cao, J. and Wang, L. (2002) Exponential Stability and Periodic Oscillatory Solution in BAM Networks with Delays. IEEE Transactions on Neural Networks, 13, 457-463. http://dx.doi.org/10.1109/72.991431

[18] Arik, S. and Tavsanoglu, V. (2005) Global Asymptotic Stability Analysis of Bidirectional Associative Memory Neural Networks with Constant Time Delays. Neurocomputing, 68, 161-176. http://dx.doi.org/10.1016/j.neucom.2004.12.002

[19] Park, J. (2006) A Novel Criterion for Global Asymptotic Stability of BAM Neural Networks with Time Delays. Chaos, Solitons and Fractals, 29, 446-453. http://dx.doi.org/10.1016/j.chaos.2005.08.018

[20] Bai, C. (2008) Stability Analysis of Cohen-Grossberg BAM Neural Networks with Delays and Impulses. Chaos, Solitons and Fractals, 35, 263-267. http://dx.doi.org/10.1016/j.chaos.2006.05.043

[21] Raja, R. and Anthoni, S.M. (2011) Global Exponential Stability of BAM Neural Networks with Time-Varying Delays: The Discrete-Time Case. Communications in Nonlinear Science and Numerical Simulation, 16, 613-622. http://dx.doi.org/10.1016/j.cnsns.2010.04.022

[22] Meyer-Base, A., Roberts, R. and Thummler, V. (2010) Local Uniform Stability of Competitive Neural Networks with Different Timescales under Vanishing Perturbations. Neurocomputing, 73, 770-775. http://dx.doi.org/10.1016/j.neucom.2009.10.003

[23] Arbi, A., Aouiti, C. and Touati, A. (2012) Uniform Asymptotic Stability and Global Asymptotic Stability for Time-Delay Hopfield Neural Networks. IFIP Advances in Information and Communication Technology, 381, 483-492. http://dx.doi.org/10.1007/978-3-642-33409-2_50

[24] Rakkiyappan, R., Cao, J. and Velmurugan, G. (2014) Existence and Uniform Stability Analysis of Fractional-Order Complex-Valued Neural Networks with Time Delays. IEEE Transactions on Neural Networks and Learning Systems, 26, 84-97.

[25] Zhang, S. (2006) Positive Solutions for Boundary Value Problems of Nonlinear Fractional Differential Equations. Electronic Journal of Differential Equations, 2006, 1-12.
